pchickey edited PR #6556.
pchickey updated PR #6556.
pchickey edited PR #6556:
When we landed the WASI Preview 2 support in `wasmtime-wasi` we knew the streams/poll/scheduler implementation was going to get torn out and rewritten: see task 4 in https://github.com/bytecodealliance/wasmtime/issues/6370. This is that rewrite.

## Features

`wasmtime-wasi` Preview 2 support is now built on top of `tokio`, and the host implementations for streams and pollables always use `async` Rust. It now passes all of the poll_oneoff tests, including on Windows.

The new `HostInputStream`, `HostOutputStream`, and `HostPollable` mechanisms are the extension point for `wasmtime_wasi`, as well as other crates, to provide implementations of the `wasi:io/streams.{input-stream,output-stream}` and `wasi:poll/poll.pollable` resources.

`wasmtime-wasi` provides a built-in `AsyncReadStream` to wrap any `tokio::io::AsyncRead` to impl `HostInputStream`, an `AsyncWriteStream` to adapt `AsyncWrite` to `HostOutputStream`, and an `AsyncFdStream` wrapper for `AsyncFd` that provides both stream impls (only available for `cfg(unix)`).

Additionally, we have provided an implementation that correctly handles waiting on `stdin` readiness.

## Sync and Async uses
Despite using async for its implementation, `wasmtime-wasi` will still work for synchronous Wasmtime embeddings. A new `test-programs` suite, `wasi-preview2-components-sync`, is identical to `wasi-preview2-components` in all regards except that it uses Wasmtime's synchronous APIs, and the appropriate `wasmtime_wasi::command::sync::add_to_linker`, rather than `wasmtime_wasi::command::add_to_linker`, which requires an async Config. The synchronous implementation creates its own Tokio runtime (if one does not already exist) and then invokes the async implementation inside of a `block_on`.

The `wasi-common` Preview 1 `poll_oneoff` implementation allowed a synchronous implementation to work without any sort of dependency on tokio, by abstracting it behind a pluggable `WasiSched`. This PR eliminates that option for our Preview 2 users. If there is a compelling need, and labor available to implement and maintain it, we could collaborate on re-introducing a fully synchronous tokio-free option in the future. For this implementation, I decided that it was too challenging
pchickey edited PR #6556:
When we landed the WASI Preview 2 support in `wasmtime-wasi` we knew the streams/poll/scheduler implementation was going to get torn out and rewritten: see task 4 in https://github.com/bytecodealliance/wasmtime/issues/6370. This is that rewrite.

## Features

`wasmtime-wasi` Preview 2 support is now built on top of `tokio`, and the host implementations for streams and pollables always use `async` Rust. It now passes all of the poll_oneoff tests, including on Windows.

The new `HostInputStream`, `HostOutputStream`, and `HostPollable` mechanisms are the extension point for `wasmtime_wasi`, as well as other crates, to provide implementations of the `wasi:io/streams.{input-stream,output-stream}` and `wasi:poll/poll.pollable` resources.

`wasmtime-wasi` provides a built-in `AsyncReadStream` to wrap any `tokio::io::AsyncRead` to impl `HostInputStream`, an `AsyncWriteStream` to adapt `AsyncWrite` to `HostOutputStream`, and an `AsyncFdStream` wrapper for `AsyncFd` that provides both stream impls (only available for `cfg(unix)`).

Additionally, we have provided an implementation that correctly handles waiting on `stdin` readiness.

## Sync and Async uses
Despite using async for its implementation, `wasmtime-wasi` will still work for synchronous Wasmtime embeddings. A new `test-programs` suite, `wasi-preview2-components-sync`, is identical to `wasi-preview2-components` in all regards except that it uses Wasmtime's synchronous APIs, and the appropriate `wasmtime_wasi::command::sync::add_to_linker`, rather than `wasmtime_wasi::command::add_to_linker`, which requires an async Config. The synchronous implementation creates its own Tokio runtime (if one does not already exist) and then invokes the async implementation inside of a `block_on`.

The `wasi-common` Preview 1 `poll_oneoff` implementation allowed a synchronous implementation to work without any sort of dependency on tokio, by abstracting it behind a pluggable `WasiSched`. This PR eliminates that option for our Preview 2 users. If there is a compelling need, and labor available to implement and maintain it, we could collaborate on re-introducing a fully synchronous tokio-free option in the future. For this implementation, I decided that we just need a tokio backend in order to build a path towards supporting both wasi-sockets and wasi-http with the same modular stream and pollable host traits.
pchickey edited PR #6556:
When we landed the WASI Preview 2 support in `wasmtime-wasi` we knew the streams/poll/scheduler implementation was going to get torn out and rewritten: see task 4 in https://github.com/bytecodealliance/wasmtime/issues/6370. This is that rewrite.

## Features

`wasmtime-wasi` Preview 2 support is now built on top of `tokio`, and the host implementations for streams and pollables always use `async` Rust. It now passes all of the poll_oneoff tests, including on Windows.

The new `HostInputStream`, `HostOutputStream`, and `HostPollable` traits are used to implement the `wasi:io/streams.{input-stream,output-stream}` and `wasi:poll/poll.pollable` resources.

## `Host{Input, Output}Stream` design

## `HostPollable` design

## Implementations
`wasmtime-wasi` provides a struct `AsyncReadStream` which wraps a `tokio::io::AsyncRead` to provide an impl of `HostInputStream`, an `AsyncWriteStream` which wraps an `AsyncWrite` to provide `HostOutputStream`, and, in `cfg(unix)`, an `AsyncFdStream` which wraps a `tokio::io::unix::AsyncFd` to provide both read and write stream impls.

`AsyncReadStream` and `AsyncWriteStream` manage their own buffers in order for the `HostInputStream::read` and `HostOutputStream::write` methods to be non-blocking. These methods call the `AsyncRead::poll_read` and `AsyncWrite::poll_write` methods with a noop-waker Context (for now, this trivial implementation has been copied out of `std` sources until it becomes available in stable Rust) as a mechanism for providing a non-blocking read/write from synchronous code - @alexcrichton showed us that design.

Additionally, we have provided an implementation that correctly handles waiting on `stdin` readiness on both Windows and Unix. On Unix, this is built on top of `AsyncFdStream`, which allows tokio to register stdin with `epoll(7)`. Because stdin is a process-wide singleton, and also because epoll will return an error if the same fd is registered multiple times, the stdin resource is a global singleton, created lazily and guarded by a mutex. `wasmtime_wasi::preview2::stdio::stdin()` returns a `struct Stdin` which takes a lock on that singleton in the `HostInputStream::{read, ready}` methods. This means that, if for some reason you have granted the process stdin to multiple wasi streams (in the same, or different, stores), you can get all sorts of strange behavior, because the same global resource backs them all behind the scenes - but this is always the case; the implementation in wasmtime_wasi just allows you to do so without hanging or panicking.

On Windows, we use the same approach as in the wasi-common Preview 1 scheduler implementation: a (global singleton, lazily created, mutex-guarded) worker thread runs `std::io::stdin().lock().fill_buf()` as a blocking function. The communication scheme from the old implementation was changed to use `tokio::sync` as the messaging primitives. This implementation has never been executed outside of CI, because I don't have a Windows machine, and I don't think our test coverage of this code is particularly great, so I basically expect someone who tries this on Windows to report bugs.

## Sync and Async uses
Despite using async for its implementation, `wasmtime-wasi` will still work for synchronous Wasmtime embeddings. A new `test-programs` suite, `wasi-preview2-components-sync`, is identical to `wasi-preview2-components` in all regards except that it uses Wasmtime's synchronous APIs, and the appropriate `wasmtime_wasi::command::sync::add_to_linker`, rather than `wasmtime_wasi::command::add_to_linker`, which requires an async Config. The synchronous implementation creates its own Tokio runtime (if one does not already exist) and then invokes the async implementation inside of a `block_on`.

The `wasi-common` Preview 1 `poll_oneoff` implementation allowed a synchronous implementation to work without any sort of dependency on tokio, by abstracting it behind a pluggable `WasiSched`. This PR eliminates that option for our Preview 2 users. If there is a compelling need, and labor available to implement and maintain it, we could collaborate on re-introducing a fully synchronous tokio-free option in the future. For this implementation, I decided that we just need a tokio backend in order to build a path towards supporting both wasi-sockets and wasi-http with the same modular stream and pollable host traits.
dicej created PR review comment:
I wonder if there's a way to optimize this by reusing `FuturesUnordered` here. From the docs:

> This structure is optimized to manage a large number of futures. Futures managed by FuturesUnordered will only be polled when they generate wake-up notifications. This reduces the required amount of work needed to poll large numbers of futures.

The implementation of `FuturesUnordered` interposes a custom waker which precisely tracks which futures have been woken, queuing them up to be polled at the next opportunity, and not wasting time polling the futures which haven't been woken. This can make a big difference when the number of pollables is large.
dicej created PR review comment:
I should mention that we wouldn't get the full benefit of `FuturesUnordered` here given the one-off nature of `poll-oneoff`, so it might not be worth it until/unless we replace `poll-oneoff` with a resource that's intended to be polled repeatedly.
alexcrichton created PR review comment:
I definitely don't disagree about the benefits of `FuturesUnordered`, but I would personally say we should start off simpler here and see how things go. My guess is that a large number of futures would have many other inefficiencies beyond the polling behavior here, and in the meantime, at least to me personally, I feel that a manual "poll the list" is easier to understand than the `FuturesUnordered` abstraction.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey edited PR #6556:
When we landed the WASI Preview 2 support in `wasmtime-wasi` we knew the streams/poll/scheduler implementation was going to get torn out and rewritten: see task 4 in https://github.com/bytecodealliance/wasmtime/issues/6370. This is that rewrite. Most of this was pair-programmed with @elliottt.

## Features

`wasmtime-wasi` Preview 2 support is now built on top of `tokio`, and the host implementations for streams and pollables always use `async` Rust. It now passes all of the poll_oneoff tests, including on Windows.

The new `HostInputStream`, `HostOutputStream`, and `HostPollable` traits are used to implement the `wasi:io/streams.{input-stream,output-stream}` and `wasi:poll/poll.pollable` resources.

## `Host{Input, Output}Stream` design

The `input-stream` and `output-stream` resources have a significant number of methods. For the host traits, we managed to boil this down to two required methods, as well as some optional methods to cover vectored IO, splice, and write-zeroes. The default implementations provide an unoptimized but correct implementation - the user may override them if an optimization is desired.

`HostInputStream` has just two methods: `fn read(&mut self, ...)` and `async fn ready(&mut self)`. The most important invariant is that `read` must be non-blocking, and `ready` must block until the stream is ready for reading.
## `HostPollable` design

The trick to `HostPollable` is to create a Rust `Future` indicating whether a stream (or some other resource, e.g. http bodies) is ready, without requiring both the stream and the pollable to hold some `Arc<Mutex<...>>` of the resource itself. @alexcrichton pointed out the right design here: `HostPollable` is an enum with two variants:

- `TableEntry { index: u32, make_future: for<'a> fn(&'a mut dyn Any) -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'a>> }` means: get a mutable reference to some (still dynamically-typed) other entry in the table, and apply this fn to create a `Future` from that entry, which has the same lifetime as the mutable reference. For the streams, we use this variant, and `make_future` is just a call to the `Host{Input,Output}Stream::ready` method.
- `Closure(Box<dyn Fn() -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'static>> + Send + Sync + 'static>)` means: use this owned closure to create a `Future` with a `'static` lifetime. This is used for creating timers from the monotonic and wall clocks, which are ambient and therefore do not have table entries.

## Implementations
`wasmtime-wasi` provides a struct `AsyncReadStream` which wraps a `tokio::io::AsyncRead` to provide an impl of `HostInputStream`, an `AsyncWriteStream` which wraps an `AsyncWrite` to provide `HostOutputStream`, and, in `cfg(unix)`, an `AsyncFdStream` which wraps a `tokio::io::unix::AsyncFd` to provide both read and write stream impls.

`AsyncReadStream` and `AsyncWriteStream` manage their own buffers in order for the `HostInputStream::read` and `HostOutputStream::write` methods to be non-blocking. These methods call the `AsyncRead::poll_read` and `AsyncWrite::poll_write` methods with a noop-waker Context (for now, this trivial implementation has been copied out of `std` sources until it becomes available in stable Rust) as a mechanism for providing a non-blocking read/write from synchronous code - @alexcrichton showed us that design.

Additionally, we have provided an implementation that correctly handles waiting on `stdin` readiness on both Windows and Unix. On Unix, this is built on top of `AsyncFdStream`, which allows tokio to register stdin with `epoll(7)`. Because stdin is a process-wide singleton, and also because epoll will return an error if the same fd is registered multiple times, the stdin resource is a global singleton, created lazily and guarded by a mutex. `wasmtime_wasi::preview2::stdio::stdin()` returns a `struct Stdin` which takes a lock on that singleton in the `HostInputStream::{read, ready}` methods. This means that, if for some reason you have granted the process stdin to multiple wasi streams (in the same, or different, stores), you can get all sorts of strange behavior, because the same global resource backs them all behind the scenes - but this is always the case; the implementation in wasmtime_wasi just allows you to do so without hanging or panicking.

On Windows, we use the same approach as in the wasi-common Preview 1 scheduler implementation: a (global singleton, lazily created, mutex-guarded) worker thread runs `std::io::stdin().lock().fill_buf()` as a blocking function. The communication scheme from the old implementation was changed to use `tokio::sync` as the messaging primitives. This implementation has never been executed outside of CI, because I don't have a Windows machine, and I don't think our test coverage of this code is particularly great, so I basically expect someone who tries this on Windows to report bugs.

## Sync and Async uses
Despite using async for its implementation, `wasmtime-wasi` will still work for synchronous Wasmtime embeddings. A new `test-programs` suite, `wasi-preview2-components-sync`, is identical to `wasi-preview2-components` in all regards except that it uses Wasmtime's synchronous APIs, and the appropriate `wasmtime_wasi::command::sync::add_to_linker`, rather than `wasmtime_wasi::command::add_to_linker`, which requires an async Config. The synchronous implementation creates its own Tokio runtime (if one does not already exist) and then invokes the async implementation inside of a `block_on`.

The `wasi-common` Preview 1 `poll_oneoff` implementation allowed a synchronous implementation to work without any sort of dependency on tokio, by abstracting it behind a pluggable `WasiSched`. This PR eliminates that option for our Preview 2 users. If there is a compelling need, and labor available to implement and maintain it, we could collaborate on re-introducing a fully synchronous tokio-free option in the future. For this implementation, I decided that we just need a tokio backend in order to build a path towards supporting both wasi-sockets and wasi-http with the same modular stream and pollable host traits.
pchickey has marked PR #6556 as ready for review.
pchickey requested fitzgen for a review on PR #6556.
pchickey requested wasmtime-core-reviewers for a review on PR #6556.
pchickey requested wasmtime-default-reviewers for a review on PR #6556.
pchickey requested alexcrichton for a review on PR #6556.
pchickey requested dicej for a review on PR #6556.
pchickey edited PR #6556:
When we landed the WASI Preview 2 support in `wasmtime-wasi` we knew the streams/poll/scheduler implementation was going to get torn out and rewritten: see task 4 in https://github.com/bytecodealliance/wasmtime/issues/6370. This is that rewrite. Most of this was pair-programmed with @elliottt.

## Features

`wasmtime-wasi` Preview 2 support is now built on top of `tokio`, and the host implementations for streams and pollables always use `async` Rust. It now passes all of the poll_oneoff tests, including on Windows.

The new `HostInputStream`, `HostOutputStream`, and `HostPollable` traits are used to implement the `wasi:io/streams.{input-stream,output-stream}` and `wasi:poll/poll.pollable` resources.

## `Host{Input, Output}Stream` design
The `input-stream` and `output-stream` resources have a significant number of methods. For the host traits, we managed to boil this down to two required methods, as well as some optional methods to cover vectored IO, splice, and write-zeroes. The default implementations provide an unoptimized but correct implementation - the user may override them if an optimization is desired.
`HostInputStream` has just two methods: `fn read(&mut self, ...)` and `async fn ready(&mut self)`. The most important invariant is that `read` must be non-blocking, and `ready` must block until the stream is ready for reading.
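A minimal std-only sketch of this two-method shape (simplified: the real traits live in `wasmtime_wasi::preview2`, return `anyhow::Result`, and carry more methods; the `MemoryInputStream` type below is purely illustrative):

```rust
use std::future::Future;
use std::pin::Pin;

#[derive(Debug, PartialEq)]
pub enum StreamState {
    Open,
    Closed,
}

// Simplified rendition of the contract: `read` must never block, and
// `ready` resolves only once the stream has data (or has closed).
pub trait HostInputStream: Send {
    fn read(&mut self, buf: &mut [u8]) -> (usize, StreamState);
    fn ready<'a>(&'a mut self) -> Pin<Box<dyn Future<Output = ()> + Send + 'a>>;
}

/// Toy implementation backed by an in-memory buffer, which is always ready.
pub struct MemoryInputStream {
    data: Vec<u8>,
}

impl MemoryInputStream {
    pub fn new(data: Vec<u8>) -> Self {
        Self { data }
    }
}

impl HostInputStream for MemoryInputStream {
    fn read(&mut self, buf: &mut [u8]) -> (usize, StreamState) {
        // Hand back whatever is buffered right now, without waiting.
        let n = buf.len().min(self.data.len());
        buf[..n].copy_from_slice(&self.data[..n]);
        self.data.drain(..n);
        let state = if self.data.is_empty() {
            StreamState::Closed
        } else {
            StreamState::Open
        };
        (n, state)
    }

    fn ready<'a>(&'a mut self) -> Pin<Box<dyn Future<Output = ()> + Send + 'a>> {
        // An in-memory stream never has to wait.
        Box::pin(std::future::ready(()))
    }
}
```

A real implementation (e.g. over a socket) would make `ready` await actual readiness; the point of the split is that the Wasm-facing `read` call stays cheap.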
## `HostPollable` design

The trick to `HostPollable` is to create a Rust `Future` indicating whether a stream (or some other resource, e.g. http bodies) is ready, without requiring both the stream and the pollable to hold some `Arc<Mutex<...>>` of the resource itself. @alexcrichton pointed out the right design here: `HostPollable` is an enum with two variants:

- `TableEntry { index: u32, make_future: for<'a> fn(&'a mut dyn Any) -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'a>> }` means: get a mutable reference to some (still dynamically-typed) other entry in the table, and apply this fn to create a `Future` from that entry, which has the same lifetime as the mutable reference. For the streams, we use this variant, and `make_future` is just a call to the `Host{Input,Output}Stream::ready` method.
- `Closure(Box<dyn Fn() -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'static>> + Send + Sync + 'static>)` means: use this owned closure to create a `Future` with a `'static` lifetime. This is used for creating timers from the monotonic and wall clocks, which are ambient and therefore do not have table entries.

## Implementations
`wasmtime-wasi` provides a struct `AsyncReadStream` which wraps a `tokio::io::AsyncRead` to provide an impl of `HostInputStream`, an `AsyncWriteStream` which wraps an `AsyncWrite` to provide `HostOutputStream`, and, in `cfg(unix)`, an `AsyncFdStream` which wraps a `tokio::io::unix::AsyncFd` to provide both read and write stream impls.

`AsyncReadStream` and `AsyncWriteStream` manage their own buffers in order for the `HostInputStream::read` and `HostOutputStream::write` methods to be non-blocking. These methods call the `AsyncRead::poll_read` and `AsyncWrite::poll_write` methods with a noop-waker Context (for now, this trivial implementation has been copied out of `std` sources until it becomes available in stable Rust) as a mechanism for providing a non-blocking read/write from synchronous code - @alexcrichton showed us that design.

Additionally, we have provided an implementation that correctly handles waiting on `stdin` readiness on both Windows and Unix. On Unix, this is built on top of `AsyncFdStream`, which allows tokio to register stdin with `epoll(7)`. Because stdin is a process-wide singleton, and also because epoll will return an error if the same fd is registered multiple times, the stdin resource is a global singleton, created lazily and guarded by a mutex. `wasmtime_wasi::preview2::stdio::stdin()` returns a `struct Stdin` which takes a lock on that singleton in the `HostInputStream::{read, ready}` methods. This means that, if for some reason you have granted the process stdin to multiple wasi streams (in the same, or different, stores), you can get all sorts of strange behavior, because the same global resource backs them all behind the scenes - but this is always the case; the implementation in wasmtime_wasi just allows you to do so without hanging or panicking.

On Windows, we use the same approach as in the wasi-common Preview 1 scheduler implementation: a (global singleton, lazily created, mutex-guarded) worker thread runs `std::io::stdin().lock().fill_buf()` as a blocking function. The communication scheme from the old implementation was changed to use `tokio::sync` as the messaging primitives. This implementation has never been executed outside of CI, because I don't have a Windows machine, and I don't think our test coverage of this code is particularly great, so I basically expect someone who tries this on Windows to report bugs.

## Sync and Async uses
Despite using async for its implementation, `wasmtime-wasi` will still work for synchronous Wasmtime embeddings. A new `test-programs` suite, `wasi-preview2-components-sync`, is identical to `wasi-preview2-components` in all regards except that it uses Wasmtime's synchronous APIs, and the appropriate `wasmtime_wasi::command::sync::add_to_linker`, rather than `wasmtime_wasi::command::add_to_linker`, which requires an async Config. The synchronous implementation creates its own Tokio runtime (if one does not already exist) and then invokes the async implementation inside of a `block_on`.

The `wasi-common` Preview 1 `poll_oneoff` implementation allowed a synchronous implementation to work without any sort of dependency on tokio, by abstracting it behind a pluggable `WasiSched`. This PR eliminates that option for our Preview 2 users. If there is a compelling need, and labor available to implement and maintain it, we could collaborate on re-introducing a fully synchronous tokio-free option in the future. For this implementation, I decided that we just need a tokio backend in order to build a path towards supporting both wasi-sockets and wasi-http with the same modular stream and pollable host traits.

## Bindings traits
After living with the bindings generated underneath `crate::preview2::wasi` and `crate::preview2::wasi::command` for a while, I decided that the interface traits belong under `crate::preview2::bindings` and command belongs under `crate::preview2::command`.

Beyond that code motion, we now generate async traits for the `wasi:io/streams` and `wasi:poll/poll` interfaces, and sync traits for everything else. If you want to use the bindings from an async embedding (i.e. with an async Config), `crate::preview2::bindings::{interface}::add_to_linker` will do it; if you want to use them from a synchronous embedding, use `crate::preview2::bindings::sync::{interface}::add_to_linker`.
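The sync-over-async bridge described under "Sync and Async uses" can be sketched with std alone. This toy `block_on` parks the calling thread between polls; it is only an illustration of the pattern, since the real code creates or reuses a tokio `Runtime` and calls its `block_on` instead:

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// Waker that unparks the thread currently blocked inside `block_on`.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

/// Drive a future to completion on the current thread, parking between polls.
pub fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(value) => return value,
            Poll::Pending => thread::park(),
        }
    }
}
```

The design point is the same either way: the synchronous entry points are thin wrappers that run the one async implementation to completion, rather than maintaining a second scheduler.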
alexcrichton submitted PR review:
Looking reasonable so far, but the high level things which I think need more looking into are:

- The `block_in_place` function I don't think can ever be used correctly, since it's used in contexts that shouldn't block for the express purpose of blocking.
- Buffering internally I think needs to all be "capped" so it doesn't buffer up too much data on repeated non-blocking calls to write/read.
- I only realized this later in review, but all `ready` futures need to be "cancel safe", where the internal objects are all left in a consistent state if the operation is cancelled.
alexcrichton created PR review comment:
I think this'd be good to have a doc block explaining when it's intended to be used: basically, exclusively as a bridge between the async implementation and the desire to have a synchronous implementation instead.
alexcrichton created PR review comment:
One possible update in the future is to use `bytes::Bytes` here, since we're already buying into tokio, but this also may not be used that heavily, in which case it wouldn't matter too much.
alexcrichton created PR review comment:
I think this is fine for now, but this I think is violating the contract of `read`, where it should be non-blocking, right? I think that `block_in_place` will correctly not block other tasks, but from the point of view of wasm this will still be blocking and "possibly taking a while", since the host call is blocked in reading.

Fixing this though I think will be somewhat involved and require similar adapters for async read/write, where a `read` queues up a request for a read and then during `async fn ready` it asynchronously blocks on a thread performing the read. That gets sort of funky though, because then you have to say that a file is ready for reading but not actually queue up a read until a read is requested (or something like that).

Anyway, that's just an idea, but probably not a good one. Something will need to change here, though, either at the WASI spec level or here, to ensure it's all consistent.
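For reference, the noop-waker trick the PR description mentions (poll once; `Pending` means "would block") can be sketched in std like this; the `try_poll_once` helper name is mine, not the PR's:

```rust
use std::future::Future;
use std::pin::Pin;
use std::ptr;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker that does nothing on wake. Polling with it turns one call to
// `poll()` into a single non-blocking attempt at the operation.
fn noop_waker() -> Waker {
    fn raw() -> RawWaker {
        fn noop(_: *const ()) {}
        fn clone(_: *const ()) -> RawWaker {
            raw()
        }
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(ptr::null(), &VTABLE)
    }
    unsafe { Waker::from_raw(raw()) }
}

/// Poll a future exactly once; `Pending` means "would block right now".
pub fn try_poll_once<F: Future + Unpin>(fut: &mut F) -> Poll<F::Output> {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    Pin::new(fut).poll(&mut cx)
}
```

Because the waker never re-schedules anything, this is only sound as a "try" operation; actually waiting for readiness has to happen in `ready` with a real waker.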
alexcrichton created PR review comment:
If you get a chance, I think it'd be good to open an issue for avoiding this `_internal_io` wrapper module in the future. It seems like it's probably here to avoid clashes between two `bindgen!` macros, or you're hiding stuff generated by `bindgen!`, but we can probably add more options to the macro in the future to fix that and avoid the need.
alexcrichton created PR review comment:
I might recommend `self.buffer.drain(..read_from_buffer)` for this perhaps
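A quick illustration of what the suggested `drain(..read_from_buffer)` call does; the `consume` helper is hypothetical, standing in for the buffer-management code under review:

```rust
/// Remove and return the first `n` bytes of `buffer`; the remaining bytes
/// shift to the front, which is exactly what a read-side buffer wants.
pub fn consume(buffer: &mut Vec<u8>, n: usize) -> Vec<u8> {
    buffer.drain(..n).collect()
}
```

This avoids hand-rolled copy-and-truncate logic for discarding the consumed prefix.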
alexcrichton created PR review comment:
I think this'll want to check the length of `self.buffer` and only block in `await` if it's zero
alexcrichton created PR review comment:
Like above, I think this would be good to have a doc block, but this ties in with my concerns on the file implementation: I'm not sure we can actually support a function such as this, since it's used in contexts which aren't supposed to block, but this causes them to block?
alexcrichton created PR review comment:
I think the "fix" here might be to ignore this error actually? If an error happens I think the only error is disconnection, which means that all future attempts to write will also continue to see the same error, so this is otherwise swallowing the bytes that were previously written and reported as written but they ended up not being received in the end.
alexcrichton created PR review comment:
One thing I might recommend to simplify this a bit is to remove the `Writable` state, because since this is a spsc channel, once a reservation is done to acquire an `OwnedPermit` it'll forever be writable immediately by the `InputStream`. Effectively, in `ready`, after `reserve_owned` I think it can call `release` to avoid the need to store channel-or-permit.
alexcrichton created PR review comment:
I think this may not be quite right, because technically `read` always succeeds, right?
alexcrichton created PR review comment:
I think it'd be worth mirroring what files/sockets do in this case. Well, ok, not files, since they just extend the file, but for sockets I think it's an EPIPE return (or equivalent thereof).
alexcrichton created PR review comment:
I think it'd be good to document what `bound` here is, perhaps? I think that this ends up being a bound on "chunks" rather than bytes, which is probably what would be expected? I think it's ok to switch from chunks to bytes as a follow-up, though.
alexcrichton created PR review comment:
I think that this is the method where the `bound` should be checked from above, because otherwise this can be called infinitely and it'll never return "wouldblock"; it'll just keep buffering data.
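The capped-buffer behavior being asked for could look roughly like this; all names are hypothetical, and a 0 return maps to the "wouldblock" case:

```rust
/// Accept at most the space remaining under `bound` bytes; the caller
/// treats a 0 return as "would block, try again after ready()".
pub fn bounded_write(buffer: &mut Vec<u8>, bound: usize, input: &[u8]) -> usize {
    let space = bound.saturating_sub(buffer.len());
    let n = space.min(input.len());
    buffer.extend_from_slice(&input[..n]);
    n
}
```

The important property is that repeated non-blocking writes stop making progress once the internal buffer is full, instead of growing without limit.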
alexcrichton created PR review comment:
One way to get rid of `Clone`, `Arc`, and `Mutex` here might be to require users of this to look up the pipe from within the original table that it was inserted into? Although I suppose that runs afoul of WASI closing it, which would then drop this, so perhaps not. It feels a bit weird, though, to have `Clone` for this type, which shouldn't really be cloned; it's intended to be more of a "one end writes, one end reads" sort of style.

Anyway, fine to handle this later.
alexcrichton created PR review comment:
Could you have a comment for why there's a hash map involved here?
(basically we can only mutably borrow each table entry at most once so all requests for the same table entry must be "canonicalized")
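The canonicalization being described could be sketched as grouping the requested pollables by table index, so each entry is mutably borrowed at most once; the `canonicalize` function is illustrative only:

```rust
use std::collections::HashMap;

/// Map each distinct table index to the positions in the poll list that
/// requested it, so one mutable borrow of the entry can serve all of them.
pub fn canonicalize(indices: &[u32]) -> HashMap<u32, Vec<usize>> {
    let mut grouped: HashMap<u32, Vec<usize>> = HashMap::new();
    for (pos, &index) in indices.iter().enumerate() {
        grouped.entry(index).or_default().push(pos);
    }
    grouped
}
```

After grouping, the poll loop can build one future per distinct entry and fan its result out to every subscription position.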
alexcrichton created PR review comment:
Unlike my thinking way up above about `block_in_place` for read/write on files, I do think that it's the correct API to use here. This is fundamentally a blocking API (even as WASI defines it), so this is saying "hey tokio, go do something as necessary, as I'm about to block".

In the context of an async server, though, it locks up a whole thread, where the "correct" thing would be to leave this as an `async fn` which does a `spawn_blocking`. Given that, I'm not sure how best to handle this. From a CLI point of view either way works, and from an async server point of view the actual real filesystem probably isn't used at all.

Although, are the underlying file objects here trait objects, as opposed to always `cap_std::fs::File`? If that's the case, then the trait object I would presume wants async methods, so the async-ness would need to be propagated here to the bindings?

Hm, ok, I started out this comment thinking "ok, `block_in_place` is right here" but now I'm less certain...
alexcrichton created PR review comment:
I don't think that this is quite right for a few reasons:

- I think the main intention here is that if `read_from_buffer` is 0, this method returns EWOULDBLOCK instead of success.
- Also, a read succeeding with 0 typically means EOF, right? So this shouldn't ever return 0 unless it also returns `StreamState::Closed`?
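The distinction being drawn here, sketched as a helper (all names hypothetical, standing in for the read path under review):

```rust
#[derive(Debug, PartialEq)]
pub enum ReadOutcome {
    Data(usize),
    WouldBlock,
    Eof,
}

/// A zero-byte read only means EOF when the stream is closed; with an
/// open stream it should surface as "would block" rather than success.
pub fn classify_read(bytes_read: usize, closed: bool) -> ReadOutcome {
    match (bytes_read, closed) {
        (0, true) => ReadOutcome::Eof,
        (0, false) => ReadOutcome::WouldBlock,
        (n, _) => ReadOutcome::Data(n),
    }
}
```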
alexcrichton created PR review comment:
Thinking about this again, I'm not sure that this is actually the desired semantics of `poll_oneoff`? At least with systems like epoll/select/poll, when you wait for a set of operations, the wait doesn't fail if one of the operations fails; rather, the wait says "that operation is ready", and when you go to get the result, it returns the failure.

In that sense, this may actually want to change the `ready` function to be `async fn ready(&self) { ... }` with no return value, and then if an error is to be returned, it's buffered up within the object to get returned by `read` or `write` afterwards?
alexcrichton created PR review comment:
Hm, ok, so this makes me realize that my idea to "just record the table index" is wrong, although subtly so. Basically, you can create a `pollable` and then close the original stream, and that would probably cause a panic here.

One possible solution would be to prevent closure while there are active `pollable` objects. Looking ahead to a resource-based future, though, that's not going to work, since there's no hook to prevent `resource.drop` on resources. Instead, this may need to record "this is being polled on", so when it's closed we don't actually fully close the object, but instead put it in an area of "this is only here for pollables", which then gets cleaned up when the pollable is closed. (Or something like that; not entirely sure.)
alexcrichton created PR review comment:
I commented on this above for something else as well, but I think the implementation here should be `Ok(())` instead of pending.
alexcrichton created PR review comment:
I think it might be good to split this impl block into a "sink" type rather than hanging it on `EmptyStream`, since for writing I think it's reasonable to say that either all writes are eaten without doing anything, or no writes are accepted and it always returns `Ok(0)`.
alexcrichton created PR review comment:
Should this return `StreamState::Closed`?
alexcrichton created PR review comment:
Same comments as way above, but I think this, which is supposed to be non-blocking, is actually blocking now?
alexcrichton created PR review comment:
Ah, this is where I think the I/O split I mentioned above would work better. For example, bytes may have been successfully written into the buffer but then ignored due to this error happening. I think it'd be better to split apart the operations so that if any bytes are ever successfully read they're guaranteed to be read from this stream.
alexcrichton created PR review comment:
I think this may not be required due to the usage of `dest.write` above? I think that may update the buffer in-place to be shorter.
alexcrichton created PR review comment:
This doesn't feel quite right to me, in that this shouldn't continuously resize the buffer. Instead, if the buffer is non-empty then it's always ready for reading, and a read is only done if the buffer is empty (in which case we could, for example, know it's already been initialized).
alexcrichton created PR review comment:
This is a bit tricky because the futures returned from `ready` need to all be "cancel safe", so if there's pending data in this buffer then it'll get lost if the `await` point below is cancelled.
alexcrichton created PR review comment:
Sort of like above with pipes, but I think this'll want a form of backpressure to keep the internal buffer from growing too large.
alexcrichton created PR review comment:
The error here is tricky to handle: data has been buffered up from the beginning of the function, but this is also reporting an error, so if the application re-wrote the data then it'd be written twice.
alexcrichton created PR review comment:
In terms of "cancel-safety", this needs to be aware of partial writes that succeed, so I don't think this will be able to use `write_all`; instead it will have to `loop` and incrementally update the internal buffer.
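A std-only sketch of that loop shape (using blocking `Write` in place of `poll_write`; the names are illustrative): the internal buffer is advanced after every partial write, so progress is never lost if the surrounding future is cancelled between iterations.

```rust
use std::io::Write;

// Hypothetical output-stream internals: `buffer` holds bytes accepted
// from the guest that have not yet reached the underlying sink.
struct OutBuf {
    buffer: Vec<u8>,
}

impl OutBuf {
    // In the async version each iteration would await a poll_write; the
    // key property is that `buffer` is truncated as soon as bytes are
    // accepted, never only after the whole batch succeeds (as a single
    // `write_all` would do).
    fn flush_some(&mut self, sink: &mut impl Write) -> std::io::Result<()> {
        while !self.buffer.is_empty() {
            let n = sink.write(&self.buffer)?;
            if n == 0 {
                break; // sink accepted nothing; try again later
            }
            self.buffer.drain(..n);
        }
        Ok(())
    }
}
```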
alexcrichton created PR review comment:
Or, put another way: each `buf` provided to this method I think should become a separate message on the channel, and then at most one `Vec<u8>` is buffered internally.
alexcrichton created PR review comment:
Alternatively, files here might need `Arc<Mutex<...>>` or something similar. For example, a `read` would spawn a task to do the actual read, and then the non-blocking `read` is basically checking if the task is done, and `ready` is additionally checking if the task is done. That I think would make the append stream below make more sense, since an append would spawn a task doing the write and then further writes/`ready` would wait for that first task to be done.
alexcrichton created PR review comment:
(Also, perhaps check `self.state` and skip the `recv` if it's closed already, but that's more minor since it'll still work otherwise without that.)
alexcrichton created PR review comment:
Or, put another way, I would expect this type to be "always ready".
alexcrichton created PR review comment:
Although personally I think it's also reasonable to instead structure this entire method as `if self.buffer.is_empty() { /* do I/O */ } else { /* do buffer stuff but no I/O */ }` to avoid mixing and matching the I/O branches.
alexcrichton created PR review comment:
Or, put another way (again, sorry): the `TrySendError::Full` case I think should return EWOULDBLOCK so callers know to go call `poll_oneoff` afterwards.
dicej submitted PR review:
Looks like a great start overall.
Echoing Alex's concerns about `block_in_place`:
- We should never use it in any function which must be non-blocking from the guest's perspective.
- We should probably also not use it in any function which _is_ blocking from the guest's perspective, since it blocks the Wasmtime fiber, silently inhibits the composition of futures (e.g. for timeouts), and also blocks the embedder in a way it might not expect.

So we can use `spawn_blocking` instead, and store the future as part of the stream, but at that point we're kind of reinventing `tokio::fs`. The `tokio` folks have already provided a battle-tested implementation, and `tokio::fs::File` implements `AsyncRead` and `AsyncWrite`, so I think we should reuse it wherever possible. I realize there are some cases where we need to use `cap-std` to enforce sandboxing, but once `cap-std` gives us a file handle, we should be able to convert it to a `tokio::fs::File` and use that to handle the details of bridging the async and sync worlds. Does that sound plausible?

Regarding buffers: I don't think we should ever need to grow buffers beyond their original sizes (and certainly not without bound). I would expect that we would flush (for writes) or fill (for reads) a buffer before moving on to the next one (or reusing it without necessarily changing its size). If you believe there's a scenario where extending a buffer is unavoidable (or at least desirable), I'd like to have a conversation to understand it better.

Generally speaking, I would expect that neither `HostInputStream::read` nor `HostOutputStream::write` implementations would do _any_ I/O and would instead leave that to `ready`, which would fill or drain `self.buffer`, respectively. That would avoid the need for the no-op waker and any kind of `block_in_place` trickery. Or, put another way: I/O should only be done in an async context where a real waker is available, thus guaranteeing wake events make it back to `poll_oneoff`.

Hope that's somewhat clear. Happy to discuss in more detail as necessary.
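A std-only sketch of that separation (hypothetical names; the real traits differ): `read` only drains an internal buffer and can never block, while all actual I/O happens in `ready`, where a real waker is available.

```rust
use std::collections::VecDeque;

// Hypothetical input-stream internals: `ready` is the only place that
// touches the underlying source; `read` just drains `buffer`.
struct BufferedInput {
    buffer: VecDeque<u8>,
    closed: bool,
}

impl BufferedInput {
    /// Non-blocking by construction: copies out of `buffer` and nothing
    /// else. Returns (bytes read, stream-closed flag).
    fn read(&mut self, dest: &mut [u8]) -> (usize, bool) {
        let n = dest.len().min(self.buffer.len());
        for slot in dest[..n].iter_mut() {
            *slot = self.buffer.pop_front().unwrap();
        }
        (n, self.closed && self.buffer.is_empty())
    }

    /// In the real trait this is `async fn ready`: it performs the
    /// actual I/O (e.g. against an AsyncRead) and refills `buffer`,
    /// setting `closed` on EOF.
    async fn ready(&mut self) {
        // fill `self.buffer` from the underlying source here
    }
}
```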
dicej created PR review comment:
According to the docs, it should be considered closed when `read` returns 0 while `dest.len() > 0`.
dicej created PR review comment:
Nits: "poisoned" is misspelled, and per https://doc.rust-lang.org/std/result/enum.Result.html#recommended-message-style, this should probably read "mutex should not be poisoned".
dicej created PR review comment:
In addition to separating the buffer and I/O cases, you can put the whole thing in a loop that runs until either `dest` is full or the buffer is empty and I/O would block.
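Sketched std-only (with a closure standing in for a non-blocking source, where `None` means "would block"; all names here are hypothetical):

```rust
use std::collections::VecDeque;

struct PipeRead {
    buffer: VecDeque<u8>,
    // Stand-in for a non-blocking source: `None` means "would block".
    source: Box<dyn FnMut() -> Option<Vec<u8>>>,
}

impl PipeRead {
    // Loop until `dest` is full, or the buffer is empty and I/O would
    // block; each iteration does either buffer work or I/O, never both.
    fn read(&mut self, dest: &mut [u8]) -> usize {
        let mut filled = 0;
        loop {
            if self.buffer.is_empty() {
                // Buffer exhausted: attempt one non-blocking refill.
                // (An empty chunk is treated as EOF in this sketch.)
                match (self.source)() {
                    Some(bytes) if !bytes.is_empty() => self.buffer.extend(bytes),
                    _ => break,
                }
            } else {
                // Buffer branch: copy bytes out, no I/O at all.
                while filled < dest.len() {
                    match self.buffer.pop_front() {
                        Some(b) => {
                            dest[filled] = b;
                            filled += 1;
                        }
                        None => break,
                    }
                }
                if filled == dest.len() {
                    break; // dest is full
                }
            }
        }
        filled
    }
}
```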
dicej created PR review comment:
This comment doesn't seem to be relevant anymore.
elliottt updated PR #6556.
pchickey updated PR #6556.
elliottt updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
elliottt updated PR #6556.
elliottt updated PR #6556.
elliottt updated PR #6556.
pchickey submitted PR review.
pchickey created PR review comment:
Yeah, we'd have to do some challenging stuff to destroy the ctx, which is harder because the methods need both the ctx and table via WasiView, so we decided against changing this now.
We really only expect this impl to be used for tests.
pchickey submitted PR review.
pchickey created PR review comment:
An I/O error that occurs is buffered up, but we wanted to leave it open for `ready()` to trap execution if something is horribly wrong.
elliottt updated PR #6556.
pchickey submitted PR review.
pchickey created PR review comment:
Yes this is a pretty significant complication. Right now we have decided to punt on it by adding a big warning in the source code, land this PR, then see if any of the assumptions involved change as we switch over to resources, and then fix it. It would be a crash vector for the whole runtime, but it would also require some pretty major work, so I want to defer it to a future PR. We will fix it before we mark the preview2 implementation as production ready.
elliottt updated PR #6556.
elliottt updated PR #6556.
elliottt updated PR #6556.
elliottt updated PR #6556.
elliottt updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
alexcrichton submitted PR review:
Nice :+1:
For the WIT changes, those are from the upstream proposals? Or local changes made temporarily for this PR?
alexcrichton created PR review comment:
I find the semantics of this type a bit dubious, but is this necessary? Would it be possible to remove this?
alexcrichton created PR review comment:
Given that this stream is never ready for reading, would a better default be the `ClosedInputStream`?
alexcrichton created PR review comment:
Should this case be `unreachable!`, since if the state is "ready" then the task should always be trying to receive one buffer?
alexcrichton created PR review comment:
Instead of an `Option` here, perhaps add `Closed` to `WriteState`?
alexcrichton created PR review comment:
I think that this may always want to be `Ok(())`, since `read` never blocks?
alexcrichton created PR review comment:
Had a good chuckle reading this :)

Regardless, I agree we can't make this truly async, and a sync close here is the right thing to do IMO.
pchickey submitted PR review.
pchickey created PR review comment:
Correct.
pchickey submitted PR review.
pchickey created PR review comment:
Yeah, we did it because it corresponded to one of the output streams, but now that you point it out, it's really not necessary at all.
pchickey edited PR review comment.
pchickey edited PR review comment.
pchickey updated PR #6556.
pchickey submitted PR review.
pchickey created PR review comment:
Correct: that was left over from a prior factoring of this code, but it's unreachable now.
pchickey submitted PR review.
pchickey created PR review comment:
We had it factored that way; then we were taking advantage of `Option::take()`, so changing the repr made sense. But now I think I can switch it back...
pchickey edited PR review comment.
pchickey updated PR #6556.
pchickey edited PR #6556:
When we landed the WASI Preview 2 support in
`wasmtime-wasi` we knew the streams/poll/scheduler implementation was going to get torn out and rewritten: see task 4 in https://github.com/bytecodealliance/wasmtime/issues/6370. This is that rewrite. Most of this was pair-programmed with @elliottt.

Features

`wasmtime-wasi` Preview 2 support is now built on top of `tokio`, and the host implementations for streams and pollable always use `async` Rust. It now passes all of the poll_oneoff tests, including on Windows.

The new `HostInputStream`, `HostOutputStream`, and `HostPollable` traits are used to implement the `wasi:io/streams.{input-stream,output-stream}` and `wasi:poll/poll.pollable` resources.
`Host{Input,Output}Stream` design

`input-stream` and `output-stream` resources have a significant number of methods. For the host traits, we managed to boil this down to two required methods, plus some optional methods to cover vectored I/O, splice, and write-zeroes. The defaults provide an unoptimized but correct implementation; the user may override them if an optimization is desired.

`HostInputStream` has just two methods: `fn read(&mut self, ...)` and `async fn ready(&mut self)`. The most important invariant is that `read` must be non-blocking, and `ready` must block until the stream is ready for reading.
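A simplified, dependency-free sketch of that trait shape (`ready` is written desugared here rather than via async-trait, and the trait is renamed to make clear it is an illustration; the real trait's signatures and return types differ):

```rust
use std::future::Future;
use std::pin::Pin;

#[derive(Debug, PartialEq)]
pub enum StreamState {
    Open,
    Closed,
}

/// Sketch: `read` must never block; `ready` resolves only once a `read`
/// could make progress.
pub trait HostInputStreamSketch: Send {
    /// Non-blocking read into `dest`, reporting bytes read and whether
    /// the stream has closed.
    fn read(&mut self, dest: &mut [u8]) -> std::io::Result<(usize, StreamState)>;

    /// Stands in for `async fn ready(&mut self)`.
    fn ready<'a>(
        &'a mut self,
    ) -> Pin<Box<dyn Future<Output = std::io::Result<()>> + Send + 'a>>;
}
```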
`HostPollable` design

The trick to `HostPollable` is to create a Rust `Future` indicating whether a stream (or some other resource, e.g. http bodies) is ready, without requiring both the stream and the pollable to hold some `Arc<Mutex<...>>` to the resource itself. @alexcrichton pointed out the right design here: `HostPollable` is an enum with two variants:

- `TableEntry { index: u32, make_future: for<'a> fn(&'a mut dyn Any) -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'a>> }` means: get a mutable reference to some (still dynamically-typed) other entry in the table, and apply this fn to create a `Future` from that entry, which has the same lifetime as the mutable reference. For the streams, we use this variant, and `make_future` is just a call to the `Host{Input,Output}Stream::ready` method.
- `Closure(Box<dyn Fn() -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'static>> + Send + Sync + 'static>)` means: use this owned closure to create a `Future` with a static lifetime. This is used for creating timers from the monotonic and wall clocks, which are ambient and therefore do not have table entries.

Implementations
`wasmtime-wasi` provides a struct `AsyncReadStream` which wraps a `tokio::io::AsyncRead` to provide an impl of `HostInputStream`, `AsyncWriteStream` to wrap an `AsyncWrite` to provide `HostOutputStream`, and, in `cfg(unix)`, `AsyncFdStream`, which wraps a `tokio::io::unix::AsyncFd` to provide both read and write stream impls.

`AsyncReadStream` and `AsyncWriteStream` manage their own buffers so that the `HostInputStream::read` and `HostOutputStream::write` methods can be non-blocking. These methods call the `AsyncRead::poll_read` and `AsyncWrite::poll_write` methods with a noop-waker `Context` (for now, this trivial implementation has been copied out of `std` sources until it becomes available in stable Rust) as a mechanism for providing a non-blocking read/write from synchronous code - @alexcrichton showed us that design.

Additionally, we have provided an implementation that correctly handles waiting on
`stdin` readiness on both Windows and Unix. On Unix, this is built on top of `AsyncFdStream`, which allows tokio to register stdin with `epoll(7)`. Because stdin is a process-wide singleton, and also because epoll will return an error if the same fd is registered multiple times, the stdin resource is a global singleton, created lazily and guarded by a mutex. `wasmtime_wasi::preview2::stdio::stdin()` returns a `struct Stdin` which takes a lock on that singleton in the `HostInputStream::{read, ready}` methods. This means that, if for some reason you have granted the process stdin to multiple wasi streams (in the same, or different, stores), you can get all sorts of strange behavior, because the same global resource backs them all behind the scenes - but this is always the case; the implementation in wasmtime_wasi just allows you to do so without hanging or panicking.

On Windows, we use the same approach as in the wasi-common Preview 1 scheduler implementation - a (global singleton, lazily created, guarded by a mutex) worker thread running `std::io::stdin().lock().fill_buf()` as a blocking function. The communication scheme from the old implementation was changed to use `tokio::sync` as the messaging primitives. This implementation has never been executed outside of CI because I don't have a Windows machine, and I don't think our test coverage of this code is particularly great, so I basically expect someone who tries this on Windows to report bugs.

Sync and Async uses
Despite using async for its implementation,
`wasmtime-wasi` will still work for synchronous Wasmtime embeddings. A new `test-programs` suite, `wasi-preview2-components-sync`, is identical to `wasi-preview2-components` in all regards except that it uses wasmtime's synchronous APIs, and the appropriate `wasmtime_wasi::command::sync::add_to_linker`, rather than `wasmtime_wasi::command::add_to_linker`, which requires an async `Config`. The synchronous implementation creates its own Tokio runtime (if one does not already exist) and then invokes the async implementation inside a `block_on`.

The `wasi-common` Preview 1 `poll_oneoff` implementation allowed a synchronous implementation to work without any dependency on tokio, by abstracting it behind a pluggable `WasiSched`. This PR eliminates that option for our Preview 2 users. If there is a compelling need, and labor available to implement and maintain it, we could collaborate on re-introducing a fully synchronous, tokio-free option in the future. For this implementation, I decided that we just need a tokio backend in order to build a path towards supporting both wasi-sockets and wasi-http with the same modular stream and pollable host traits.

Bindings traits
After living with the bindings generated underneath
`crate::preview2::wasi` and `crate::preview2::wasi::command` for a while, I decided that the interface traits belong under `crate::preview2::bindings` and command belongs under `crate::preview2::command`.

Beyond that code motion, we now generate async traits for the `wasi:io/streams`, `wasi:poll/poll`, and `wasi:filesystem/filesystem` interfaces, and sync traits for everything else. If you want to use the bindings from an async embedding (i.e. with an async `Config`), `crate::preview2::bindings::{interface}::add_to_linker` will do it; if you want to use them from a synchronous embedding, use `crate::preview2::bindings::sync::{interface}::add_to_linker`.
pchickey edited PR #6556:
When we landed the WASI Preview 2 support in
`wasmtime-wasi` we knew the streams/poll/scheduler implementation was going to get torn out and rewritten: see task 4 in https://github.com/bytecodealliance/wasmtime/issues/6370. This is that rewrite. Most of this was pair-programmed with @elliottt.

Features

`wasmtime-wasi` Preview 2 support is now built on top of `tokio`, and the host implementations for streams and pollable always use `async` Rust. It now passes all of the poll_oneoff tests, including on Windows.

The new `HostInputStream`, `HostOutputStream`, and `HostPollable` traits are used to implement the `wasi:io/streams.{input-stream,output-stream}` and `wasi:poll/poll.pollable` resources.
`Host{Input,Output}Stream` design

`input-stream` and `output-stream` resources have a significant number of methods. For the host traits, we managed to boil this down to two required methods, plus some optional methods to cover vectored I/O, splice, and write-zeroes. The defaults provide an unoptimized but correct implementation; the user may override them if an optimization is desired.

`HostInputStream` has just two methods: `fn read(&mut self, ...)` and `async fn ready(&mut self)`. The most important invariant is that `read` must be non-blocking, and `ready` must block until the stream is ready for reading.

Nonblocking IO is actually impossible on regular files, but we hide that detail from WASI programs
There's a huge lie in the description above: internally, wasmtime_wasi uses `InternalHost{Input,Output}Stream`, which are enums that contain either a `Host{Input,Output}Stream` or a private `File{Input,Output}Stream`. These file streams always perform a blocking read/write, whether the WASI program has called the nonblocking `read` or the blocking `blocking_read` method. And any pollable associated with file read or write readiness always returns ready immediately.

This is required because there is no way to implement streams on files with `epoll(7)`-based readiness (or on macOS or Windows using mio, afaik) - `poll(2)` will lie and always tell you files are ready for reading or writing, but the actual syscall may end up blocking the thread due to disk I/O. And rather than stick the work on a background task, we must wait until the OS `read(2)` or `write(2)` for a file has completed before returning from a wasi stream (nonblocking) read or write, because the userlands in WASI expect (reasonably!) that any errors in regular file I/O are reported synchronously.

So, we put an escape hatch into wasmtime-wasi which is just for regular file I/O. It `await`s a tokio `spawn_blocking` to complete in both the blocking and non-blocking implementations of the stream reads and writes. Like all the other syscalls in the filesystem implementation, we need the `async` Rust to put any blocking syscalls in a `spawn_blocking`, in order to not block the executor.

We spent the better part of a week trying to square this circle, and this is the best we could come up with to make files work like we expect, while also exposing the right interfaces from wasmtime-wasi so that all other streams have proper non-blocking semantics.
`HostPollable` design

The trick to `HostPollable` is to create a Rust `Future` indicating whether a stream (or some other resource, e.g. http bodies) is ready, without requiring both the stream and the pollable to hold some `Arc<Mutex<...>>` to the resource itself. @alexcrichton pointed out the right design here: `HostPollable` is an enum with two variants:

- `TableEntry { index: u32, make_future: for<'a> fn(&'a mut dyn Any) -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'a>> }` means: get a mutable reference to some (still dynamically-typed) other entry in the table, and apply this fn to create a `Future` from that entry, which has the same lifetime as the mutable reference. For the streams, we use this variant, and `make_future` is just a call to the `Host{Input,Output}Stream::ready` method.
- `Closure(Box<dyn Fn() -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'static>> + Send + Sync + 'static>)` means: use this owned closure to create a `Future` with a static lifetime. This is used for creating timers from the monotonic and wall clocks, which are ambient and therefore do not have table entries.
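The shape described above, paraphrased as compilable std-only Rust (the type is renamed to mark it as a sketch, and `std::io::Error` stands in for the PR's actual error type):

```rust
use std::any::Any;
use std::future::Future;
use std::pin::Pin;

// Stand-in readiness future; the PR uses its own Result<()> alias.
type ReadyFuture<'a> =
    Pin<Box<dyn Future<Output = std::io::Result<()>> + Send + 'a>>;

enum HostPollableSketch {
    /// Re-borrow another table entry by index and build a readiness
    /// future from it; the future lives no longer than the borrow.
    TableEntry {
        index: u32,
        make_future: for<'a> fn(&'a mut dyn Any) -> ReadyFuture<'a>,
    },
    /// An owned closure producing `'static` futures, e.g. clock timers
    /// that have no table entry behind them.
    Closure(Box<dyn Fn() -> ReadyFuture<'static> + Send + Sync + 'static>),
}
```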
TableEntryvariant that exposes a crash vector if the parent resource of a pollable is destroyed before the pollable itself. This is a big enough problem that we are setting it aside to be solved in a follow-up PR.Implementations
`wasmtime-wasi` provides a struct `AsyncReadStream` which wraps a `tokio::io::AsyncRead` to provide an impl of `HostInputStream`, and `AsyncWriteStream` to wrap an `AsyncWrite` to provide `HostOutputStream`.

`AsyncReadStream` and `AsyncWriteStream` manage their own buffers so that the `HostInputStream::read` and `HostOutputStream::write` methods can be non-blocking. Each spawns a helper tokio task to perform async operations in the background; this requires some buffering.

Note: we discussed ensuring these background tasks are reaped properly. Right now we believe the implementations are correct and the tasks will exit on their own when the foreground side gets dropped. However, in the future, to implement e.g. splice and forward correctly, we may need to allow these tasks to (optionally!) live longer than the WASI program's execution.
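A std-threads analogue of that background-task pattern (a hypothetical helper, using `std::sync::mpsc` where the PR uses tokio tasks and `tokio::sync` channels): a worker performs the blocking reads and ships buffers over a bounded channel, the foreground side can stay non-blocking via `try_recv`, and the worker exits on its own once the receiver is dropped.

```rust
use std::io::Read;
use std::sync::mpsc;
use std::thread;

// Spawn a worker that reads from `src` and forwards chunks; the bounded
// channel (capacity 1) provides backpressure so the internal buffering
// cannot grow without bound.
fn spawn_reader(mut src: impl Read + Send + 'static) -> mpsc::Receiver<Vec<u8>> {
    let (tx, rx) = mpsc::sync_channel(1);
    thread::spawn(move || {
        let mut buf = [0u8; 4096];
        loop {
            match src.read(&mut buf) {
                Ok(0) | Err(_) => break, // EOF or error: worker exits
                Ok(n) => {
                    if tx.send(buf[..n].to_vec()).is_err() {
                        break; // receiver dropped: reap ourselves
                    }
                }
            }
        }
    });
    rx
}
```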
Additionally, we have provided an implementation that correctly handles waiting on
`stdin` readiness on both Windows and Unix. On Unix, this is built on top of `tokio::io::unix::AsyncFd`, which allows tokio to register stdin with `epoll(7)`. Because stdin is a process-wide singleton, and also because epoll will return an error if the same fd is registered multiple times, the stdin resource is a global singleton, created lazily and guarded by a mutex. `wasmtime_wasi::preview2::stdio::stdin()` returns a `struct Stdin` which takes a lock on that singleton in the `HostInputStream::{read, ready}` methods. This means that, if for some reason you have granted the process stdin to multiple wasi streams (in the same, or different, stores), you can get all sorts of strange behavior, because the same global resource backs them all behind the scenes - but this is always the case; the implementation in wasmtime_wasi just allows you to do so without hanging or panicking.

On Windows, the implementation of stdin is broken, and inheriting stdin will panic. We have a path to fixing it, but we are leaving that out of this PR, because it has been open for far too long, and far too much work is predicated on landing it.
Sync and Async uses
Despite using async for its implementation,
`wasmtime-wasi` will still work for synchronous Wasmtime embeddings. A new `test-programs` suite, `wasi-preview2-components-sync`, is identical to `wasi-preview2-components` in all regards except that it uses wasmtime's synchronous APIs, and the appropriate `wasmtime_wasi::command::sync::add_to_linker`, rather than `wasmtime_wasi::command::add_to_linker`, which requires an async `Config`. The synchronous implementation creates its own Tokio runtime (if one does not already exist) and then invokes the async implementation inside a `block_on`.

The `wasi-common` Preview 1 `poll_oneoff` implementation allowed a synchronous implementation to work without any dependency on tokio, by abstracting it behind a pluggable `WasiSched`. This PR eliminates that option for our Preview 2 users. If there is a compelling need, and labor available to implement and maintain it, we could collaborate on re-introducing a fully synchronous, tokio-free option in the future. For this implementation, I decided that we just need a tokio backend in order to build a path towards supporting both wasi-sockets and wasi-http with the same modular stream and pollable host traits.

Bindings traits
After living with the bindings generated underneath
`crate::preview2::wasi` and `crate::preview2::wasi::command` for a while, I decided that the interface traits belong under `crate::preview2::bindings` and command belongs under `crate::preview2::command`.

Beyond that code motion, we now generate async traits for the `wasi:io/streams`, `wasi:poll/poll`, and `wasi:filesystem/filesystem` interfaces, and sync traits for everything else. If you want to use the bindings from an async embedding (i.e. with an async `Config`), `crate::preview2::bindings::{interface}::add_to_linker` will do it; if you want to use them from a synchronous embedding, use `crate::preview2::bindings::sync::{interface}::add_to_linker`.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey has enabled auto merge for PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey merged PR #6556.
Last updated: Dec 13 2025 at 19:03 UTC