pchickey edited PR #6556:
<!--
Please make sure you include the following information:

If this work has been discussed elsewhere, please include a link to that
conversation. If it was discussed in an issue, just mention "issue #...".

Explain why this change is needed. If the details are in an issue already,
this can be brief.

Our development process is documented in the Wasmtime book:
https://docs.wasmtime.dev/contributing-development-process.html

Please ensure all communication follows the code of conduct:
https://github.com/bytecodealliance/wasmtime/blob/main/CODE_OF_CONDUCT.md
-->
pchickey updated PR #6556.
pchickey edited PR #6556:
When we landed the WASI Preview 2 support in `wasmtime-wasi`, we knew the streams/poll/scheduler implementation was going to get torn out and rewritten: see task 4 in https://github.com/bytecodealliance/wasmtime/issues/6370. This is that rewrite.

Features

`wasmtime-wasi` Preview 2 support is now built on top of `tokio`, and the host implementations for streams and pollable always use `async` Rust. It now passes all of the poll_oneoff tests, including on Windows.

The new `HostInputStream`, `HostOutputStream`, and `HostPollable` mechanisms are the extension point for `wasmtime_wasi`, as well as other crates, to provide implementations of the `wasi:io/streams.{input-stream,output-stream}` and `wasi:poll/poll.pollable` resources.

Wasmtime-wasi provides a built-in `AsyncReadStream` to wrap any `tokio::io::AsyncRead` to impl `HostInputStream`, an `AsyncWriteStream` to adapt `AsyncWrite` to `HostOutputStream`, and an `AsyncFdStream` wrapper for `AsyncFd` that provides both stream impls (only available for `cfg(unix)`).

Additionally, we have provided an implementation that correctly handles waiting on `stdin` readiness.

Sync and Async uses

Despite using async for its implementation, `wasmtime-wasi` will still work for synchronous Wasmtime embeddings. A new `test-programs` suite, `wasi-preview2-components-sync`, is identical to `wasi-preview2-components` in all regards except that it uses Wasmtime's synchronous APIs and the appropriate `wasmtime_wasi::command::sync::add_to_linker`, rather than `wasmtime_wasi::command::add_to_linker`, which requires an async Config. The synchronous implementation creates its own Tokio runtime (if one does not already exist) and then invokes the async implementation inside of a `block_on`.

The `wasi-common` Preview 1 `poll_oneoff` implementation allowed a synchronous implementation to work without any dependency on tokio, by abstracting it behind a pluggable `WasiSched`. This PR eliminates that option for our Preview 2 users. If there is a compelling need, and labor available to implement and maintain it, we could collaborate on re-introducing a fully synchronous tokio-free option in the future. For this implementation, I decided that it was too challenging
pchickey edited PR #6556:
When we landed the WASI Preview 2 support in `wasmtime-wasi`, we knew the streams/poll/scheduler implementation was going to get torn out and rewritten: see task 4 in https://github.com/bytecodealliance/wasmtime/issues/6370. This is that rewrite.

Features

`wasmtime-wasi` Preview 2 support is now built on top of `tokio`, and the host implementations for streams and pollable always use `async` Rust. It now passes all of the poll_oneoff tests, including on Windows.

The new `HostInputStream`, `HostOutputStream`, and `HostPollable` mechanisms are the extension point for `wasmtime_wasi`, as well as other crates, to provide implementations of the `wasi:io/streams.{input-stream,output-stream}` and `wasi:poll/poll.pollable` resources.

Wasmtime-wasi provides a built-in `AsyncReadStream` to wrap any `tokio::io::AsyncRead` to impl `HostInputStream`, an `AsyncWriteStream` to adapt `AsyncWrite` to `HostOutputStream`, and an `AsyncFdStream` wrapper for `AsyncFd` that provides both stream impls (only available for `cfg(unix)`).

Additionally, we have provided an implementation that correctly handles waiting on `stdin` readiness.

Sync and Async uses

Despite using async for its implementation, `wasmtime-wasi` will still work for synchronous Wasmtime embeddings. A new `test-programs` suite, `wasi-preview2-components-sync`, is identical to `wasi-preview2-components` in all regards except that it uses Wasmtime's synchronous APIs and the appropriate `wasmtime_wasi::command::sync::add_to_linker`, rather than `wasmtime_wasi::command::add_to_linker`, which requires an async Config. The synchronous implementation creates its own Tokio runtime (if one does not already exist) and then invokes the async implementation inside of a `block_on`.

The `wasi-common` Preview 1 `poll_oneoff` implementation allowed a synchronous implementation to work without any dependency on tokio, by abstracting it behind a pluggable `WasiSched`. This PR eliminates that option for our Preview 2 users. If there is a compelling need, and labor available to implement and maintain it, we could collaborate on re-introducing a fully synchronous tokio-free option in the future. For this implementation, I decided that we just need a tokio backend in order to build a path towards supporting both wasi-sockets and wasi-http with the same modular stream and pollable host traits.
pchickey edited PR #6556:
When we landed the WASI Preview 2 support in `wasmtime-wasi`, we knew the streams/poll/scheduler implementation was going to get torn out and rewritten: see task 4 in https://github.com/bytecodealliance/wasmtime/issues/6370. This is that rewrite.

Features

`wasmtime-wasi` Preview 2 support is now built on top of `tokio`, and the host implementations for streams and pollable always use `async` Rust. It now passes all of the poll_oneoff tests, including on Windows.

The new `HostInputStream`, `HostOutputStream`, and `HostPollable` traits are used to implement the `wasi:io/streams.{input-stream,output-stream}` and `wasi:poll/poll.pollable` resources.

Host{Input, Output}Stream design

HostPollable design

Implementations

`wasmtime-wasi` provides a struct `AsyncReadStream` which wraps a `tokio::io::AsyncRead` to provide an impl of `HostInputStream`, an `AsyncWriteStream` which wraps `AsyncWrite` to provide `HostOutputStream`, and, in `cfg(unix)`, an `AsyncFdStream` which wraps `tokio::io::unix::AsyncFd` to provide both read and write stream impls.

`AsyncReadStream` and `AsyncWriteStream` manage their own buffer in order for the `HostInputStream::read` and `HostOutputStream::write` methods to be non-blocking. These methods call the `AsyncRead::poll_read` and `AsyncWrite::poll_write` methods with a noop-waker Context (for now, this trivial implementation has been copied out of `std` sources until it becomes available in stable Rust) as a mechanism for providing a non-blocking read/write from synchronous code - @alexcrichton showed us that design.

Additionally, we have provided an implementation that correctly handles waiting on `stdin` readiness on both Windows and Unix. On Unix, this is built on top of `AsyncFdStream`, which allows tokio to register stdin with `epoll(7)`. Because stdin is a process-wide singleton, and also because epoll will return an error if the same fd is registered multiple times, the stdin resource is a global singleton, created lazily and guarded by a mutex. `wasmtime_wasi::preview2::stdio::stdin()` returns a `struct Stdin` which takes a lock on that singleton in the `HostInputStream::{read, ready}` methods. This means that, if for some reason you have granted the process stdin to multiple wasi streams (in the same, or different, stores), you can get all sorts of strange behavior, because the same global resource backs them all behind the scenes - but this is always the case; the implementation in wasmtime_wasi just allows you to do so without hanging or panicking.

On Windows, we use the same approach as in the wasi-common preview 1 scheduler implementation - a (global singleton, lazily created, guarded by a mutex) worker thread running `std::io::stdin().lock().fill_buf()` as a blocking function. The communication scheme from the old implementation was changed to use `tokio::sync` as the messaging primitives. This implementation has never been executed outside of CI because I don't have a Windows machine, and I don't think our test coverage of this code is particularly great, so I basically expect someone who tries this on Windows to report bugs.

Sync and Async uses

Despite using async for its implementation, `wasmtime-wasi` will still work for synchronous Wasmtime embeddings. A new `test-programs` suite, `wasi-preview2-components-sync`, is identical to `wasi-preview2-components` in all regards except that it uses Wasmtime's synchronous APIs and the appropriate `wasmtime_wasi::command::sync::add_to_linker`, rather than `wasmtime_wasi::command::add_to_linker`, which requires an async Config. The synchronous implementation creates its own Tokio runtime (if one does not already exist) and then invokes the async implementation inside of a `block_on`.

The `wasi-common` Preview 1 `poll_oneoff` implementation allowed a synchronous implementation to work without any dependency on tokio, by abstracting it behind a pluggable `WasiSched`. This PR eliminates that option for our Preview 2 users. If there is a compelling need, and labor available to implement and maintain it, we could collaborate on re-introducing a fully synchronous tokio-free option in the future. For this implementation, I decided that we just need a tokio backend in order to build a path towards supporting both wasi-sockets and wasi-http with the same modular stream and pollable host traits.
dicej created PR review comment:
I wonder if there's a way to optimize this by reusing `FuturesUnordered` here. From the docs:

> This structure is optimized to manage a large number of futures. Futures managed by FuturesUnordered will only be polled when they generate wake-up notifications. This reduces the required amount of work needed to poll large numbers of futures.

The implementation of `FuturesUnordered` interposes a custom waker which precisely tracks which futures have been woken, queuing them up to be polled at the next opportunity, and not wasting time polling the futures which haven't been woken. This can make a big difference when the number of pollables is large.
dicej created PR review comment:
I should mention that we wouldn't get the full benefit of `FuturesUnordered` here given the one-off nature of `poll-oneoff`, so it might not be worth it until/unless we replace `poll-oneoff` with a resource that's intended to be polled repeatedly.
alexcrichton created PR review comment:
I definitely don't disagree about the benefits of `FuturesUnordered`, but I would personally say we should start off simpler here and see how things go. My guess is that a large number of futures would have many other inefficiencies beyond the polling behavior here, and in the meantime, at least to me personally, I feel that a manual "poll the list" is easier to understand than the `FuturesUnordered` abstraction.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey edited PR #6556:
When we landed the WASI Preview 2 support in `wasmtime-wasi`, we knew the streams/poll/scheduler implementation was going to get torn out and rewritten: see task 4 in https://github.com/bytecodealliance/wasmtime/issues/6370. This is that rewrite. Most of this was pair-programmed with @elliottt.

Features

`wasmtime-wasi` Preview 2 support is now built on top of `tokio`, and the host implementations for streams and pollable always use `async` Rust. It now passes all of the poll_oneoff tests, including on Windows.

The new `HostInputStream`, `HostOutputStream`, and `HostPollable` traits are used to implement the `wasi:io/streams.{input-stream,output-stream}` and `wasi:poll/poll.pollable` resources.

Host{Input, Output}Stream design

`input-stream` and `output-stream` resources have a significant number of methods. For the host traits, we managed to boil this down to two required methods, as well as some optional methods to cover vectored IO, splice, and write-zeroes. The default implementations provide an unoptimized but correct implementation - the user may override them if an optimization is desired.

`HostInputStream` has just two methods: `fn read(&mut self, ...)` and `async fn ready(&mut self)`. The most important invariant is that `read` must be non-blocking, and `ready` must block until the stream is ready for reading.

HostPollable design

The trick to `HostPollable` is to create a Rust `Future` indicating whether a stream (or some other resource, e.g. http bodies) is ready, without requiring both the stream and pollable to hold some `Arc<Mutex<...>>` of the resource itself. @alexcrichton pointed out the right design here: `HostPollable` is an enum with two variants:

- `TableEntry { index: u32, make_future: for<'a> fn(&'a mut dyn Any) -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'a>> }` means: get a mutable reference to some (still dynamically-typed) other entry in the table, and apply this fn to create a Future from that entry, which has the same lifetime as the mutable reference. For the streams, we use this variant, and make_future is just a call to the `Host{Input,Output}Stream::ready` method.
- `Closure(Box<dyn Fn() -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'static>> + Send + Sync + 'static>)` means: use this owned closure to create a Future with a static lifetime. This is used for creating timers from the monotonic and wall clocks, which are ambient and therefore do not have table entries.

Implementations

`wasmtime-wasi` provides a struct `AsyncReadStream` which wraps a `tokio::io::AsyncRead` to provide an impl of `HostInputStream`, an `AsyncWriteStream` which wraps `AsyncWrite` to provide `HostOutputStream`, and, in `cfg(unix)`, an `AsyncFdStream` which wraps `tokio::io::unix::AsyncFd` to provide both read and write stream impls.

`AsyncReadStream` and `AsyncWriteStream` manage their own buffer in order for the `HostInputStream::read` and `HostOutputStream::write` methods to be non-blocking. These methods call the `AsyncRead::poll_read` and `AsyncWrite::poll_write` methods with a noop-waker Context (for now, this trivial implementation has been copied out of `std` sources until it becomes available in stable Rust) as a mechanism for providing a non-blocking read/write from synchronous code - @alexcrichton showed us that design.

Additionally, we have provided an implementation that correctly handles waiting on `stdin` readiness on both Windows and Unix. On Unix, this is built on top of `AsyncFdStream`, which allows tokio to register stdin with `epoll(7)`. Because stdin is a process-wide singleton, and also because epoll will return an error if the same fd is registered multiple times, the stdin resource is a global singleton, created lazily and guarded by a mutex. `wasmtime_wasi::preview2::stdio::stdin()` returns a `struct Stdin` which takes a lock on that singleton in the `HostInputStream::{read, ready}` methods. This means that, if for some reason you have granted the process stdin to multiple wasi streams (in the same, or different, stores), you can get all sorts of strange behavior, because the same global resource backs them all behind the scenes - but this is always the case; the implementation in wasmtime_wasi just allows you to do so without hanging or panicking.

On Windows, we use the same approach as in the wasi-common preview 1 scheduler implementation - a (global singleton, lazily created, guarded by a mutex) worker thread running `std::io::stdin().lock().fill_buf()` as a blocking function. The communication scheme from the old implementation was changed to use `tokio::sync` as the messaging primitives. This implementation has never been executed outside of CI because I don't have a Windows machine, and I don't think our test coverage of this code is particularly great, so I basically expect someone who tries this on Windows to report bugs.

Sync and Async uses

Despite using async for its implementation, `wasmtime-wasi` will still work for synchronous Wasmtime embeddings. A new `test-programs` suite, `wasi-preview2-components-sync`, is identical to `wasi-preview2-components` in all regards except that it uses Wasmtime's synchronous APIs and the appropriate `wasmtime_wasi::command::sync::add_to_linker`, rather than `wasmtime_wasi::command::add_to_linker`, which requires an async Config. The synchronous implementation creates its own Tokio runtime (if one does not already exist) and then invokes the async implementation inside of a `block_on`.

The `wasi-common` Preview 1 `poll_oneoff` implementation allowed a synchronous implementation to work without any dependency on tokio, by abstracting it behind a pluggable `WasiSched`. This PR eliminates that option for our Preview 2 users. If there is a compelling need, and labor available to implement and maintain it, we could collaborate on re-introducing a fully synchronous tokio-free option in the future. For this implementation, I decided that we just need a tokio backend in order to build a path towards supporting both wasi-sockets and wasi-http with the same modular stream and pollable host traits.
pchickey has marked PR #6556 as ready for review.
pchickey requested fitzgen for a review on PR #6556.
pchickey requested wasmtime-core-reviewers for a review on PR #6556.
pchickey requested wasmtime-default-reviewers for a review on PR #6556.
pchickey requested alexcrichton for a review on PR #6556.
pchickey requested dicej for a review on PR #6556.
pchickey edited PR #6556:
When we landed the WASI Preview 2 support in `wasmtime-wasi`, we knew the streams/poll/scheduler implementation was going to get torn out and rewritten: see task 4 in https://github.com/bytecodealliance/wasmtime/issues/6370. This is that rewrite. Most of this was pair-programmed with @elliottt.

Features

`wasmtime-wasi` Preview 2 support is now built on top of `tokio`, and the host implementations for streams and pollable always use `async` Rust. It now passes all of the poll_oneoff tests, including on Windows.

The new `HostInputStream`, `HostOutputStream`, and `HostPollable` traits are used to implement the `wasi:io/streams.{input-stream,output-stream}` and `wasi:poll/poll.pollable` resources.

Host{Input, Output}Stream design

`input-stream` and `output-stream` resources have a significant number of methods. For the host traits, we managed to boil this down to two required methods, as well as some optional methods to cover vectored IO, splice, and write-zeroes. The default implementations provide an unoptimized but correct implementation - the user may override them if an optimization is desired.

`HostInputStream` has just two methods: `fn read(&mut self, ...)` and `async fn ready(&mut self)`. The most important invariant is that `read` must be non-blocking, and `ready` must block until the stream is ready for reading.
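As a rough sketch of that shape (hedged: `MemoryStream`, the exact method signatures, and the `StreamState` enum here are illustrative, not the real `wasmtime-wasi` definitions):

```rust
use std::future::Future;
use std::pin::Pin;

// Hypothetical, simplified rendering of the two required methods described
// above; the real wasmtime-wasi trait differs in names and error handling.
#[derive(Debug, PartialEq)]
pub enum StreamState {
    Open,
    Closed,
}

pub trait HostInputStream: Send {
    /// Must return immediately: copies whatever bytes are already buffered
    /// (possibly zero, meaning "would block").
    fn read(&mut self, buf: &mut [u8]) -> std::io::Result<(usize, StreamState)>;

    /// Resolves only once `read` would make progress.
    fn ready(&mut self) -> Pin<Box<dyn Future<Output = ()> + Send + '_>>;
}

// Toy in-memory impl demonstrating the non-blocking `read` contract.
// (It treats a drained buffer as Closed purely for illustration.)
pub struct MemoryStream {
    pub data: Vec<u8>,
}

impl HostInputStream for MemoryStream {
    fn read(&mut self, buf: &mut [u8]) -> std::io::Result<(usize, StreamState)> {
        let n = self.data.len().min(buf.len());
        buf[..n].copy_from_slice(&self.data[..n]);
        self.data.drain(..n);
        let state = if self.data.is_empty() {
            StreamState::Closed
        } else {
            StreamState::Open
        };
        Ok((n, state))
    }

    fn ready(&mut self) -> Pin<Box<dyn Future<Output = ()> + Send + '_>> {
        // Buffered data is always immediately ready.
        Box::pin(std::future::ready(()))
    }
}
```

The only invariant being demonstrated is that `read` returns immediately regardless of how much data is available; all blocking lives in `ready`.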
HostPollable design

The trick to `HostPollable` is to create a Rust `Future` indicating whether a stream (or some other resource, e.g. http bodies) is ready, without requiring both the stream and pollable to hold some `Arc<Mutex<...>>` of the resource itself. @alexcrichton pointed out the right design here: `HostPollable` is an enum with two variants:

- `TableEntry { index: u32, make_future: for<'a> fn(&'a mut dyn Any) -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'a>> }` means: get a mutable reference to some (still dynamically-typed) other entry in the table, and apply this fn to create a Future from that entry, which has the same lifetime as the mutable reference. For the streams, we use this variant, and make_future is just a call to the `Host{Input,Output}Stream::ready` method.
- `Closure(Box<dyn Fn() -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'static>> + Send + Sync + 'static>)` means: use this owned closure to create a Future with a static lifetime. This is used for creating timers from the monotonic and wall clocks, which are ambient and therefore do not have table entries.

Implementations
`wasmtime-wasi` provides a struct `AsyncReadStream` which wraps a `tokio::io::AsyncRead` to provide an impl of `HostInputStream`, an `AsyncWriteStream` which wraps `AsyncWrite` to provide `HostOutputStream`, and, in `cfg(unix)`, an `AsyncFdStream` which wraps `tokio::io::unix::AsyncFd` to provide both read and write stream impls.

`AsyncReadStream` and `AsyncWriteStream` manage their own buffer in order for the `HostInputStream::read` and `HostOutputStream::write` methods to be non-blocking. These methods call the `AsyncRead::poll_read` and `AsyncWrite::poll_write` methods with a noop-waker Context (for now, this trivial implementation has been copied out of `std` sources until it becomes available in stable Rust) as a mechanism for providing a non-blocking read/write from synchronous code - @alexcrichton showed us that design.

Additionally, we have provided an implementation that correctly handles waiting on `stdin` readiness on both Windows and Unix. On Unix, this is built on top of `AsyncFdStream`, which allows tokio to register stdin with `epoll(7)`. Because stdin is a process-wide singleton, and also because epoll will return an error if the same fd is registered multiple times, the stdin resource is a global singleton, created lazily and guarded by a mutex. `wasmtime_wasi::preview2::stdio::stdin()` returns a `struct Stdin` which takes a lock on that singleton in the `HostInputStream::{read, ready}` methods. This means that, if for some reason you have granted the process stdin to multiple wasi streams (in the same, or different, stores), you can get all sorts of strange behavior, because the same global resource backs them all behind the scenes - but this is always the case; the implementation in wasmtime_wasi just allows you to do so without hanging or panicking.

On Windows, we use the same approach as in the wasi-common preview 1 scheduler implementation - a (global singleton, lazily created, guarded by a mutex) worker thread running `std::io::stdin().lock().fill_buf()` as a blocking function. The communication scheme from the old implementation was changed to use `tokio::sync` as the messaging primitives. This implementation has never been executed outside of CI because I don't have a Windows machine, and I don't think our test coverage of this code is particularly great, so I basically expect someone who tries this on Windows to report bugs.

Sync and Async uses

Despite using async for its implementation, `wasmtime-wasi` will still work for synchronous Wasmtime embeddings. A new `test-programs` suite, `wasi-preview2-components-sync`, is identical to `wasi-preview2-components` in all regards except that it uses Wasmtime's synchronous APIs and the appropriate `wasmtime_wasi::command::sync::add_to_linker`, rather than `wasmtime_wasi::command::add_to_linker`, which requires an async Config. The synchronous implementation creates its own Tokio runtime (if one does not already exist) and then invokes the async implementation inside of a `block_on`.

The `wasi-common` Preview 1 `poll_oneoff` implementation allowed a synchronous implementation to work without any dependency on tokio, by abstracting it behind a pluggable `WasiSched`. This PR eliminates that option for our Preview 2 users. If there is a compelling need, and labor available to implement and maintain it, we could collaborate on re-introducing a fully synchronous tokio-free option in the future. For this implementation, I decided that we just need a tokio backend in order to build a path towards supporting both wasi-sockets and wasi-http with the same modular stream and pollable host traits.

Bindings traits

After living with the bindings generated underneath `crate::preview2::wasi` and `crate::preview2::wasi::command` for a while, I decided that the interface traits belong under `crate::preview2::bindings` and command belongs under `crate::preview2::command`.

Beyond that code motion, we now generate async traits for the `wasi:io/streams` and `wasi:poll/poll` interfaces, and sync traits for everything else. If you want to use the bindings from an async embedding (i.e. with an async Config), `crate::preview2::bindings::{interface}::add_to_linker` will do it; if you want to use them from a synchronous embedding, use `crate::preview2::bindings::sync::{interface}::add_to_linker`.
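The two-variant `HostPollable` design described above can be sketched roughly as follows (hedged: hypothetical, simplified signatures; the real enum in `wasmtime_wasi` differs, e.g. in its error type):

```rust
use std::any::Any;
use std::future::Future;
use std::pin::Pin;

type ReadyFuture<'a> = Pin<Box<dyn Future<Output = ()> + Send + 'a>>;

// Hypothetical, simplified rendering of the two-variant design described
// in the PR text; names and signatures are illustrative only.
pub enum HostPollable {
    /// Mutably borrow another table entry and build a future from it, with
    /// the same lifetime as the borrow (used for streams).
    TableEntry {
        index: u32,
        make_future: for<'a> fn(&'a mut dyn Any) -> ReadyFuture<'a>,
    },
    /// An owned closure producing a 'static future (used for clock timers,
    /// which are ambient and have no table entry).
    Closure(Box<dyn Fn() -> ReadyFuture<'static> + Send + Sync + 'static>),
}

// Example `make_future` for the TableEntry variant: ignore the entry and
// report readiness immediately.
fn always_ready(_entry: &mut dyn Any) -> ReadyFuture<'_> {
    Box::pin(std::future::ready(()))
}
```

The key point of the `TableEntry` variant is that the future borrows the table entry only for the duration of a poll, avoiding any shared `Arc<Mutex<...>>` ownership between the stream and its pollable.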
alexcrichton submitted PR review:
Looking reasonable so far, but the high-level things which I think need more looking into are:
- The `block_in_place` function I don't think can ever be used correctly, since it's used in contexts that shouldn't block for the express purpose of blocking.
- Buffering internally I think needs to all be "capped" so it doesn't buffer up too much data on repeated non-blocking calls to write/read.
- I only realized this later in review, but all `ready` futures need to be "cancel safe", where the internal objects are all left in a consistent state if the operation is cancelled.
alexcrichton created PR review comment:
I think this'd be good to have a doc-block explaining when it's intended to be used, basically exclusively as a bridge between the async implementation and the desire to have a synchronous implementation instead.
alexcrichton created PR review comment:
One possible update in the future is to use `bytes::Bytes` here, since we're already buying into tokio stuff, but this also may not be used that heavily, in which case it wouldn't matter too too much.
alexcrichton created PR review comment:
I think this is fine for now, but this I think is violating the contract of `read`, where it should be non-blocking, right? I think that `block_in_place` will correctly not block other tasks, but from the point of view of wasm this will still be blocking and "possibly taking a while" since the host call is blocked in reading.

Fixing this though I think will be somewhat involved and require similar adapters for async read/write, where a `read` queues up a request for a read and then during `async fn ready` it asynchronously blocks on a thread performing the read. That gets sort of funky though, because then you have to say that a file is ready for reading but not actually queue up a read until a read is requested (or something like that).

Anyway, that's just an idea, but probably not a good one. Something will need to change here though, I think, either at the WASI spec level or here or something, to ensure it's all consistent.
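For context, the noop-waker technique the PR body describes (poll a future once with a waker that does nothing, treating `Pending` as "would block") can be sketched generically like this; this is an illustration of the technique, not the PR's actual code:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Hand-rolled no-op waker, in the copied-out-of-std style the PR mentions,
// for attempting a single poll from synchronous code.
fn noop_waker() -> Waker {
    fn raw_waker() -> RawWaker {
        static VTABLE: RawWakerVTable = RawWakerVTable::new(
            |_| raw_waker(), // clone
            |_| {},          // wake
            |_| {},          // wake_by_ref
            |_| {},          // drop
        );
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    // SAFETY: the vtable functions never dereference the data pointer.
    unsafe { Waker::from_raw(raw_waker()) }
}

/// Poll a future exactly once without blocking; `None` means "would block".
fn try_poll_once<F: Future + Unpin>(fut: &mut F) -> Option<F::Output> {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    match Pin::new(fut).poll(&mut cx) {
        Poll::Ready(v) => Some(v),
        Poll::Pending => None,
    }
}
```

Because the waker never wakes anything, a `Pending` result is simply reported to the caller instead of suspending, which is what makes a non-blocking `read` possible from synchronous code.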
alexcrichton created PR review comment:
If you get a chance I think it'd be good to open an issue for avoiding this `_internal_io` wrapper module in the future. It seems like it's probably here to avoid clashes between two `bindgen!` macros, or you're hiding stuff generated by `bindgen!`, but we can probably add more options to the macro in the future to fix that and avoid the need.
alexcrichton created PR review comment:
I might recommend `self.buffer.drain(..read_from_buffer)` for this, perhaps.
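For reference, `Vec::drain` removes and yields a prefix in place, which is what the suggestion amounts to (toy values, not the PR's actual buffer):

```rust
// Toy illustration of draining a consumed prefix from a buffer in place,
// as suggested above; values and the helper name are made up.
fn consume_prefix(buffer: &mut Vec<u8>, read_from_buffer: usize) {
    // Removes buffer[..read_from_buffer], shifting the tail down.
    buffer.drain(..read_from_buffer);
}
```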
alexcrichton created PR review comment:
I think this'll want to check the length of `self.buffer` and only block in `await` if it's zero.
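A sketch of the suggested shape, with a hypothetical buffered type (names are illustrative, not the PR's code):

```rust
// Hypothetical buffered stream illustrating the suggestion: `ready`
// resolves immediately while data is buffered, and only awaits the
// underlying source when the buffer is empty.
struct BufferedStream {
    buffer: Vec<u8>,
}

impl BufferedStream {
    /// `ready` should only await the underlying source when this is true.
    fn must_wait(&self) -> bool {
        self.buffer.is_empty()
    }

    async fn ready(&mut self) {
        if self.must_wait() {
            // Stand-in for awaiting the underlying reader's readiness.
            std::future::ready(()).await;
        }
    }
}
```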
alexcrichton created PR review comment:
Like above, I think this would be good to have a doc block, but this ties in with my concerns on the file implementation in that I'm not sure we can actually support a function such as this, since it's used in contexts which aren't supposed to block but this causes them to block?
alexcrichton created PR review comment:
I think the "fix" here might be to ignore this error actually? If an error happens I think the only error is disconnection, which means that all future attempts to write will also continue to see the same error, so this is otherwise swallowing the bytes that were previously written and reported as written but they ended up not being received in the end.
alexcrichton created PR review comment:
One thing I might recommend to simplify this a bit is to remove the `Writable` state, since, because this is a spsc channel, once a reservation is done to acquire an `OwnedPermit` then it'll forever be writable immediately by the `InputStream`. Effectively, in `ready`, after `reserve_owned` I think it can call `release` to avoid the need to store channel-or-permit.
alexcrichton created PR review comment:
I think this may not be quite right because technically `read` always succeeds, right?
alexcrichton created PR review comment:
I think it'd be worth mirroring what files/sockets do in this case. Well, ok, not files, since they just extend the file, but for sockets I think it's an EPIPE return (or the equivalent thereof).
alexcrichton created PR review comment:
I think it'd be good to document what `bound` here is, perhaps? I think that this ends up being a bound on "chunks", though, rather than bytes, which is probably what would be expected? I think it's ok to switch from chunks to bytes as a follow-up though.
alexcrichton created PR review comment:
I think that this is the method where the `bound` should be checked from above, because otherwise this can infinitely be called and it'll never return "wouldblock"; it'll just keep buffering data.
alexcrichton created PR review comment:
One way to get rid of `Clone`, `Arc`, and `Mutex` here might be to require users of this to look up the pipe from within the original table that it was inserted into? Although I suppose that runs afoul of WASI closing it, which would then drop this, so perhaps not. It feels a bit weird though to have `Clone` for this type, which shouldn't really be cloned; rather, it's intended to be more of a "one end writes, one end reads" sort of style.

Anyway, fine to handle this later.
alexcrichton created PR review comment:
Could you have a comment for why there's a hash map involved here?
(basically we can only mutably borrow each table entry at most once so all requests for the same table entry must be "canonicalized")
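The canonicalization described here can be sketched as grouping the requested positions by table index, so that each table entry needs to be mutably borrowed at most once (hypothetical helper, not the PR's code):

```rust
use std::collections::HashMap;

// Group the positions of each requested table index so each entry is
// mutably borrowed only once, as the parenthetical above explains.
// The helper name and shape are illustrative only.
fn canonicalize(indices: &[u32]) -> HashMap<u32, Vec<usize>> {
    let mut by_entry: HashMap<u32, Vec<usize>> = HashMap::new();
    for (pos, &index) in indices.iter().enumerate() {
        by_entry.entry(index).or_default().push(pos);
    }
    by_entry
}
```

With duplicates collapsed, one future per distinct entry can be created from a single mutable borrow, and its result fanned back out to every position that requested it.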
alexcrichton created PR review comment:
Unlike my thinking way up above about `block_in_place` for read/write on files, I do think that it's the correct API to use here. This is fundamentally a blocking API (even as WASI defines it), so this is saying "hey tokio, go do something as necessary as I'm about to block".

In the context of an async server, though, this locks up a whole thread, where the "correct" thing would be to leave this as an `async fn` which does a `spawn_blocking`. Given that, I'm not sure how best to handle this. From a CLI point of view either way works, and from an async server point of view the actual real filesystem probably isn't used at all.

Although, are the underlying file objects here trait objects, as opposed to "always `cap_std::fs::File`"? If that's the case then the trait object I would presume wants async methods, so the async-ness would need to be propagated here to the bindings?

Hm, ok, I started out this comment thinking "ok, `block_in_place` is right here" but now I'm less certain...
alexcrichton created PR review comment:
I don't think that this is quite right for a few reasons:
- I think the main intention here is that if `read_from_buffer` is 0 then this method returns EWOULDBLOCK instead of success.
- Also, a read succeeding with 0 typically means EOF, right? So this shouldn't ever return 0 unless it also returns `StreamState::Closed`?
alexcrichton created PR review comment:
Thinking about this again, I'm not sure that this is actually the desired semantics of `poll_oneoff` that we want? At least with systems like epoll/select/poll/etc., when you wait for a set of operations the wait doesn't fail if one of the operations fails; rather, the wait says "that operation is ready", and when you go to get the result it returns the failure.

In that sense this may actually want to change the `ready` function to be `async fn ready(&self) { ... }` with no return value, and then if an error is to be returned it's buffered up within the object to get returned by `read` or `write` afterwards?
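A sketch of that suggested shape, using hypothetical types (not the PR's code): `ready` records any failure internally, and the next `read` surfaces it:

```rust
// Hypothetical stream illustrating the suggestion above: `ready` resolves
// with no return value, recording any failure internally, and the next
// `read` returns the buffered error.
struct ErrorBufferingStream {
    pending_error: Option<std::io::Error>,
}

impl ErrorBufferingStream {
    /// Waits for readiness; failures are recorded, not returned.
    async fn ready(&mut self) {
        // Stand-in for the real asynchronous wait: here we simulate
        // observing a disconnection while waiting.
        self.pending_error = Some(std::io::Error::new(
            std::io::ErrorKind::BrokenPipe,
            "disconnected",
        ));
    }

    /// Non-blocking read; surfaces the error buffered by `ready`, if any.
    fn read(&mut self, _buf: &mut [u8]) -> std::io::Result<usize> {
        if let Some(e) = self.pending_error.take() {
            return Err(e);
        }
        Ok(0)
    }
}
```

This mirrors the epoll/select convention the comment describes: the wait itself never fails; the failure is reported by the subsequent operation on the ready object.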
alexcrichton created PR review comment:
Hm, ok, so this makes me realize that my idea of "just record the table index" is wrong, although subtly so. Basically, you can create a `pollable` and then close the original stream, and that would probably cause a panic here.
One possible solution would be to prevent closure while there are active `pollable` objects. Looking into a resource-based future, though, that's not going to work since there's no hook to prevent `resource.drop` on resources.
Instead this may need to record "this is being polled on", so when it's closed we don't actually fully close the object but instead put it in an area of "this is only here for pollable", and then that gets cleaned up when the pollable is closed. (or something like that, not entirely sure)
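The "park it for pollables" idea above could be sketched like this (hypothetical names and a bare `HashMap`; the real table in wasmtime-wasi differs): closing a stream that still has pollables parks it instead of removing it, and the parked entry is reaped when its last pollable goes away.

```rust
use std::collections::HashMap;

enum Entry {
    Active { pollables: u32 },
    ParkedForPollables { pollables: u32 },
}

struct Table(HashMap<u32, Entry>);

impl Table {
    fn close_stream(&mut self, index: u32) {
        // If pollables still reference the entry, park it rather than drop it.
        let park = matches!(self.0.get(&index), Some(Entry::Active { pollables }) if *pollables > 0);
        if park {
            if let Some(Entry::Active { pollables }) = self.0.remove(&index) {
                self.0.insert(index, Entry::ParkedForPollables { pollables });
            }
        } else {
            self.0.remove(&index);
        }
    }

    fn drop_pollable(&mut self, index: u32) {
        // Decrement the refcount; a parked entry with no pollables left is reaped.
        let remove = match self.0.get_mut(&index) {
            Some(Entry::Active { pollables }) => { *pollables -= 1; false }
            Some(Entry::ParkedForPollables { pollables }) => { *pollables -= 1; *pollables == 0 }
            None => false,
        };
        if remove {
            self.0.remove(&index);
        }
    }
}
```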
alexcrichton created PR review comment:
I commented on this above for something else as well, but I think the implementation here should be `Ok(())` instead of pending
alexcrichton created PR review comment:
I think it might be good to split this impl block to a "sink" type rather than hanging it on `EmptyStream`, since for writing I think it's reasonable to say that either all writes are eaten without doing anything, or no writes are accepted and it always returns `Ok(0)`
alexcrichton created PR review comment:
Should this return `StreamState::Closed`?
alexcrichton created PR review comment:
Same comments as way above, but I think this which is supposed to be non-blocking is actually blocking now?
alexcrichton created PR review comment:
Ah, this is where I think the I/O split I mentioned above would work better. For example, bytes may have been successfully written into the buffer but then ignored due to this error happening. I think it'd be better to split apart the operations so that if any bytes are ever successfully read, they're guaranteed to be read from this stream.
alexcrichton created PR review comment:
I think this may not be required due to the usage of `dest.write` above? I think that may update the buffer in-place to be shorter.
alexcrichton created PR review comment:
This doesn't feel quite right to me in that this shouldn't continuously resize the buffer; instead, if the buffer is non-empty then it's always ready for reading, and a read is only done if the buffer is empty (in which case we could, for example, know it's already been initialized)
alexcrichton created PR review comment:
This is a bit tricky because the futures returned from `ready` need to all be "cancel safe", so if there's pending data in this buffer then it'll get lost if the `await` point below is cancelled.
alexcrichton created PR review comment:
Sort of like above with pipes, but I think this'll want a form of backpressure to keep the internal buffer from growing too large
alexcrichton created PR review comment:
The error here is tricky to handle because data has been buffered up from the beginning of the function but this is also reporting an error, so if the application re-wrote the data then it'd be written twice
alexcrichton created PR review comment:
In terms of "cancel-safety" this needs to be aware of partial writes that succeed, so I don't think this will be able to use `write_all`, but will instead have to `loop` and incrementally update the internal buffer.
alexcrichton created PR review comment:
Or put another way, each `buf` provided to this method I think should become a separate message on the channel, and then at most one `Vec<u8>` is buffered internally.
alexcrichton created PR review comment:
Alternatively, files here might need `Arc<Mutex<...>>` or something similar. For example a `read` would spawn a task to do the actual read, and then the non-blocking `read` is basically checking if the task is done, and `ready` is additionally checking if the task is done. That I think would make the append stream below make more sense, since an append would spawn a task doing the write, and then further writes/ready would wait for that first task to be done.
alexcrichton created PR review comment:
(also perhaps check `self.state` and skip the `recv` if it's closed already, but that's more minor since it'll still work otherwise without that)
alexcrichton created PR review comment:
Or put another way, I would expect this type to be "always ready"
alexcrichton created PR review comment:
Although personally I think it's also reasonable to instead structure this entire method as `if self.buffer.is_empty() { /* do I/O */ } else { /* do buffer stuff but no I/O */ }` to avoid mixing and matching the I/O branches
alexcrichton created PR review comment:
Or put another way (again, sorry), the `TrySendError::Full` case I think should return EWOULDBLOCK so callers know to go call `poll_oneoff` afterwards.
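The suggested mapping could look like this, using std's bounded channel as a stand-in for the tokio channel in the PR (names are hypothetical): a full channel becomes EWOULDBLOCK, telling the caller to go wait in `poll_oneoff`.

```rust
use std::sync::mpsc::{sync_channel, SyncSender, TrySendError};

// Each write becomes one message on a bounded channel; a full channel maps
// to EWOULDBLOCK so at most one buffer is ever held internally.
fn try_write(tx: &SyncSender<Vec<u8>>, buf: Vec<u8>) -> Result<usize, &'static str> {
    let len = buf.len();
    match tx.try_send(buf) {
        Ok(()) => Ok(len),
        Err(TrySendError::Full(_)) => Err("EWOULDBLOCK"),
        Err(TrySendError::Disconnected(_)) => Err("closed"),
    }
}
```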
dicej submitted PR review:
Looks like a great start overall.
Echoing Alex's concerns about `block_in_place`:
- We should never use it in any function which must be non-blocking from the guest's perspective.
- We should probably also not use it in any function which _is_ blocking from the guest's perspective, since it blocks the Wasmtime fiber, silently inhibits the composition of futures (e.g. for timeouts), and also blocks the embedder in a way it might not expect.
So we can use `spawn_blocking` instead, and store the future as part of the stream, but at that point we're kind of reinventing `tokio::fs`. The `tokio` folks have already provided a battle-tested implementation, and `tokio::fs::File` implements `AsyncRead` and `AsyncWrite`, so I think we should reuse it wherever possible. I realize there are some cases where we need to use `cap-std` to enforce sandboxing, but once `cap-std` gives us a file handle, we should be able to convert it to a `tokio::fs::File` and use that to handle the details of bridging the async and sync worlds. Does that sound plausible?
Regarding buffers: I don't think we should ever need to grow buffers beyond their original sizes (and certainly not without bound). I would expect that we would flush (for writes) or fill (for reads) a buffer before moving on to the next one (or reusing it without necessarily changing its size). If you believe there's a scenario where extending a buffer is unavoidable (or at least desirable), I'd like to have a conversation to understand that better.
Generally speaking, I would expect that neither `HostInputStream::read` nor `HostOutputStream::write` implementations would do _any_ I/O and would instead leave that to `ready`, which would fill or drain `self.buffer`, respectively. That would avoid the need for the no-op waker and any kind of `block_in_place` trickery. Or put another way, I/O should only be done in an async context where a real waker is available, thus guaranteeing wake events make it back to `poll_oneoff`.
Hope that's somewhat clear. Happy to discuss in more detail as necessary.
dicej created PR review comment:
According to the docs, it should be considered closed when
read
returns 0 whendest.len() > 0
.
dicej created PR review comment:
Nits: "poisoned" is misspelled, and per https://doc.rust-lang.org/std/result/enum.Result.html#recommended-message-style, this should probably read "mutex should not be poisoned".
dicej created PR review comment:
In addition to separating the buffer and I/O cases, you can put the whole thing in a loop that runs until either `dest` is full or the buffer is empty and I/O would block.
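That loop shape might look roughly like this (a simplified sketch; `poll_source` is a hypothetical stand-in for the real non-blocking read): drain the internal buffer without I/O, then refill it only when it is empty, stopping on a full `dest` or a would-block.

```rust
use std::collections::VecDeque;

// Loop until `dest` is full, or the buffer is empty and I/O would block.
fn read_loop(
    buffer: &mut VecDeque<u8>,
    dest: &mut Vec<u8>,
    dest_cap: usize,
    mut poll_source: impl FnMut() -> Option<Vec<u8>>, // None == would block
) {
    loop {
        // Buffer case: copy out without doing any I/O.
        while dest.len() < dest_cap {
            match buffer.pop_front() {
                Some(b) => dest.push(b),
                None => break,
            }
        }
        if dest.len() == dest_cap {
            return; // dest is full
        }
        // I/O case: refill the (now empty) buffer, or stop on would-block.
        match poll_source() {
            Some(bytes) => buffer.extend(bytes),
            None => return,
        }
    }
}
```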
dicej created PR review comment:
This comment doesn't seem to be relevant anymore.
elliottt updated PR #6556.
pchickey updated PR #6556.
elliottt updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
elliottt updated PR #6556.
elliottt updated PR #6556.
elliottt updated PR #6556.
pchickey submitted PR review.
pchickey created PR review comment:
Yeah, we'd have to do some challenging stuff to destroy the ctx, which is harder because the methods need both the ctx and table via WasiView, so we decided against changing this now.
We really only expect this impl to be used for tests.
pchickey submitted PR review.
pchickey created PR review comment:
An io error that occurs is buffered up, but we wanted to leave it open for `ready()` to trap execution if something is horribly wrong.
elliottt updated PR #6556.
pchickey submitted PR review.
pchickey created PR review comment:
Yes this is a pretty significant complication. Right now we have decided to punt on it by adding a big warning in the source code, land this PR, then see if any of the assumptions involved change as we switch over to resources, and then fix it. It would be a crash vector for the whole runtime, but it would also require some pretty major work, so I want to defer it to a future PR. We will fix it before we mark the preview2 implementation as production ready.
elliottt updated PR #6556.
elliottt updated PR #6556.
elliottt updated PR #6556.
elliottt updated PR #6556.
elliottt updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
alexcrichton submitted PR review:
Nice :+1:
For the WIT changes, those are from the upstream proposals? Or local changes made temporarily for this PR?
alexcrichton created PR review comment:
I find the semantics of this type a bit dubious, but is this necessary? Would it be possible to remove this?
alexcrichton created PR review comment:
Given that the semantics of this stream are that it's never ready for reading, would a better default be the `ClosedInputStream`?
alexcrichton created PR review comment:
Should this case be `unreachable!`, since if the state is "ready" then the task should always be trying to receive one buffer?
alexcrichton created PR review comment:
Instead of an `Option` here, perhaps add `Closed` to `WriteState`?
alexcrichton created PR review comment:
I think that this may always want to be `Ok(())` since `read` never blocks?
alexcrichton created PR review comment:
Had a good chuckle reading this :)
Regardless, I agree we can't make this truly async, and sync close here is the right thing to do IMO
pchickey submitted PR review.
pchickey created PR review comment:
Correct.
pchickey submitted PR review.
pchickey created PR review comment:
Yeah, we did it because it corresponded to one of the output streams, but now that you point it out, it's really not necessary at all
pchickey edited PR review comment.
pchickey edited PR review comment.
pchickey updated PR #6556.
pchickey submitted PR review.
pchickey created PR review comment:
Correct - that was left over from a prior factoring of this code, but it's unreachable now
pchickey submitted PR review.
pchickey created PR review comment:
We had it factored that way, then we were taking advantage of `Option::take()`, so changing the repr made sense, but now I think I can switch it back...
pchickey edited PR review comment.
pchickey updated PR #6556.
pchickey edited PR #6556:
When we landed the WASI Preview 2 support in `wasmtime-wasi` we knew the streams/poll/scheduler implementation was going to get torn out and rewritten: see task 4 in https://github.com/bytecodealliance/wasmtime/issues/6370. This is that rewrite. Most of this was pair-programmed with @elliottt.
Features
`wasmtime-wasi` Preview 2 support is now built on top of `tokio`, and the host implementations for streams and pollable always use `async` Rust. It now passes all of the poll_oneoff tests, including on Windows.
The new `HostInputStream`, `HostOutputStream`, and `HostPollable` traits are used to implement the `wasi:io/streams.{input-stream,output-stream}` and `wasi:poll/poll.pollable` resources.
`Host{Input, Output}Stream` design
`input-stream` and `output-stream` resources have a significant number of methods. For the host traits, we managed to boil this down to two required methods, as well as some optional methods to cover vectored IO, splice, and write-zeroes. The default implementations provide an unoptimized but correct implementation - the user may override them if an optimization is desired.
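A sketch of that required-plus-default-methods shape (hypothetical, heavily simplified signatures; not the crate's real trait): the optional vectored read gets an unoptimized but correct default built on the required `read`, which an implementor may override.

```rust
trait InputStreamLike {
    /// Required: non-blocking read of up to `size` bytes.
    fn read(&mut self, size: usize) -> Vec<u8>;

    /// Optional: vectored read, with a correct default built on `read`.
    fn read_vectored(&mut self, sizes: &[usize]) -> Vec<Vec<u8>> {
        sizes.iter().map(|&s| self.read(s)).collect()
    }
}

// Toy implementor: yields 1, 2, 3, ... so results are easy to see.
struct CountingStream(u8);

impl InputStreamLike for CountingStream {
    fn read(&mut self, size: usize) -> Vec<u8> {
        (0..size).map(|_| { self.0 += 1; self.0 }).collect()
    }
}
```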
`HostInputStream` has just two methods: `fn read(&mut self, ...)` and `async fn ready(&mut self)`. The most important invariant is that `read` must be non-blocking, and `ready` must block until the stream is ready for reading.
`HostPollable` design
The trick to `HostPollable` is to create a Rust `Future` indicating whether a stream (or some other resource, e.g. http bodies) is ready, without requiring both the stream and pollable to hold some `Arc<Mutex<...>>` of the resource itself. @alexcrichton pointed out the right design here: `HostPollable` is an enum with two variants:
- `TableEntry { index: u32, make_future: for<'a> fn(&'a mut dyn Any) -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'a>> }` means: get a mutable reference to some (still dynamically-typed) other entry in the table, and apply this fn to create a `Future` from that entry, which has the same lifetime as the mutable reference. For the streams, we use this variant, and make_future is just a call to the `Host{Input,Output}Stream::ready` method.
- `Closure(Box<dyn Fn() -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'static>> + Send + Sync + 'static>)` means: use this owned closure to create a `Future` with a static lifetime. This is used for creating timers from the monotonic and wall clocks, which are ambient and therefore do not have table entries.
Implementations
`wasmtime-wasi` provides a struct `AsyncReadStream` which wraps a `tokio::io::AsyncRead` to provide an impl of `HostInputStream`, `AsyncWriteStream` which wraps `AsyncWrite` to provide `HostOutputStream`, and, in `cfg(unix)`, `AsyncFdStream` which wraps `tokio::io::unix::AsyncFd` to provide both read and write stream impls.
`AsyncReadStream` and `AsyncWriteStream` manage their own buffer in order for the `HostInputStream::read` and `HostOutputStream::write` methods to be non-blocking. These methods call the `AsyncRead::poll_read` and `AsyncWrite::poll_write` methods with a noop-waker `Context` (for now, this trivial implementation has been copied out of `std` sources until it becomes available in stable Rust) as a mechanism for providing a non-blocking read/write from synchronous code - @alexcrichton showed us that design.
Additionally, we have provided an implementation that correctly handles waiting on `stdin` readiness on both Windows and Unix. On Unix, this is built on top of `AsyncFdStream`, which allows tokio to register stdin with `epoll(7)`. Because stdin is a process-wide singleton, and also because epoll will return an error if the same fd is registered multiple times, the stdin resource is a global singleton, created lazily and guarded by a mutex. `wasmtime_wasi::preview2::stdio::stdin()` returns a `struct Stdin` which takes a lock on that singleton in the `HostInputStream::{read, ready}` methods. This means that, if for some reason you have granted the process stdin to multiple wasi streams (in the same, or different, stores), you can get all sorts of strange behavior because the same global resource backs them all behind the scenes - but this is always the case; the implementation in wasmtime_wasi just allows you to do so without hanging or panicking.
On Windows, we use the same approach as in the wasi-common Preview 1 scheduler implementation - a (global singleton, lazily created, guarded by a mutex) worker thread running `std::io::stdin().lock().fill_buf()` as a blocking function. The communication scheme from the old implementation was changed to use tokio::sync as the messaging primitives. This implementation has never been executed outside of CI because I don't have a Windows machine, and I don't think our test coverage of this code is particularly great, so I basically expect someone who tries this on Windows to report bugs.
Sync and Async uses
Despite using async for its implementation, `wasmtime-wasi` will still work for synchronous Wasmtime embeddings. A new `test-programs` suite `wasi-preview2-components-sync` is identical to `wasi-preview2-components` in all regards except that it uses wasmtime's synchronous APIs, and the appropriate `wasmtime_wasi::command::sync::add_to_linker`, rather than `wasmtime_wasi::command::add_to_linker`, which requires an async Config. The synchronous implementation creates its own Tokio runtime (if one does not already exist) and then invokes the async implementation inside of a `block_on`.
The `wasi-common` Preview 1 `poll_oneoff` implementation allowed a synchronous implementation to work without any sort of dependency on tokio, by abstracting it behind a pluggable `WasiSched`. This PR eliminates that option for our Preview 2 users. If there is a compelling need, and labor available to implement and maintain it, we could collaborate on re-introducing a fully synchronous tokio-free option in the future. For this implementation, I decided that we just need a tokio backend in order to build a path towards supporting both wasi-sockets and wasi-http with the same modular stream and pollable host traits.
Bindings traits
After living with the bindings generated underneath `crate::preview2::wasi` and `crate::preview2::wasi::command` for a while, I decided that the interface traits belong under `crate::preview2::bindings` and command belongs under `crate::preview2::command`.
Beyond that code motion, we now generate async traits for the `wasi:io/streams`, `wasi:poll/poll`, and `wasi:filesystem/filesystem` interfaces, and sync traits for everything else. If you want to use the bindings from an async embedding (i.e. with an async Config), `crate::preview2::bindings::{interface}::add_to_linker` will do it, and if you want to use them from a synchronous embedding, use `crate::preview2::bindings::sync::{interface}::add_to_linker`.
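The noop-waker polling mentioned in the Implementations section can be sketched with std types (a hand-rolled equivalent of the trivial waker the PR copies out of `std` internals): synchronous code polls an async source exactly once to ask "ready right now?" without any executor.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker whose wake/clone/drop operations all do nothing.
fn noop_waker() -> Waker {
    fn noop_clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(noop_clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

/// Poll a future once: Some(output) if immediately ready, None if pending.
fn poll_once<F: Future>(fut: &mut Pin<Box<F>>) -> Option<F::Output> {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(v) => Some(v),
        Poll::Pending => None,
    }
}
```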
pchickey edited PR #6556:
When we landed the WASI Preview 2 support in `wasmtime-wasi` we knew the streams/poll/scheduler implementation was going to get torn out and rewritten: see task 4 in https://github.com/bytecodealliance/wasmtime/issues/6370. This is that rewrite. Most of this was pair-programmed with @elliottt.
Features
`wasmtime-wasi` Preview 2 support is now built on top of `tokio`, and the host implementations for streams and pollable always use `async` Rust. It now passes all of the poll_oneoff tests, including on Windows.
The new `HostInputStream`, `HostOutputStream`, and `HostPollable` traits are used to implement the `wasi:io/streams.{input-stream,output-stream}` and `wasi:poll/poll.pollable` resources.
`Host{Input, Output}Stream` design
`input-stream` and `output-stream` resources have a significant number of methods. For the host traits, we managed to boil this down to two required methods, as well as some optional methods to cover vectored IO, splice, and write-zeroes. The default implementations provide an unoptimized but correct implementation - the user may override them if an optimization is desired.
`HostInputStream` has just two methods: `fn read(&mut self, ...)` and `async fn ready(&mut self)`. The most important invariant is that `read` must be non-blocking, and `ready` must block until the stream is ready for reading.
Nonblocking IO is actually impossible on regular files, but we hide that detail from WASI programs
There's a huge lie in the description above: internally, wasmtime_wasi uses `InternalHost{Input,Output}Stream`, which are enums that contain either a `Host{Input,Output}Stream`, or a private `File{Input,Output}Stream`. These file streams always perform a blocking read/write, whether or not the WASI program has called the nonblocking `read` or the blocking `blocking_read` method. And, any pollable associated with file read or write readiness always returns ready immediately.
This is required because there is no way to implement streams on files with `epoll(7)` (or on macOS or Windows using mio, afaik) based readiness - `poll(2)` will lie and always tell you files are ready for reading or writing, but the actual syscall may end up blocking the thread due to disk IO. And, rather than stick the work on a background task, we must wait until the OS `read(2)` or `write(2)` for a file has completed before returning from a wasi stream (nonblocking) read or write, because the userlands in WASI expect (reasonably!) that any errors in regular file IO are reported synchronously.
So, we put an escape hatch into wasmtime-wasi which is just for regular file IO. It `await`s a tokio `spawn_blocking` to complete in both the blocking and non-blocking implementations of the stream reads and writes. Like all the other syscalls in the filesystem implementation, we need the `async` Rust to put any blocking syscalls in a `spawn_blocking`, in order to not block the executor.
We spent the better part of a week trying to square this circle, and this is the best we could come up with to make files work like we expect, while also exposing the right interfaces from wasmtime-wasi so that all other streams have the proper non-blocking semantics.
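The "run the blocking syscall elsewhere, but still wait for its result" shape can be illustrated with plain threads (the PR itself awaits tokio's `spawn_blocking` from async code; `std::thread` is only a stand-in here, and `do_read` is a hypothetical closure standing in for the real file syscall):

```rust
use std::thread;

fn blocking_file_read(
    do_read: impl FnOnce() -> std::io::Result<Vec<u8>> + Send + 'static,
) -> std::io::Result<Vec<u8>> {
    // Run the (potentially disk-blocking) read off the calling thread...
    let handle = thread::spawn(do_read);
    // ...but still wait for completion before returning, because WASI
    // userlands expect file I/O errors to be reported synchronously.
    handle.join().expect("read task panicked")
}
```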
`HostPollable` design
The trick to `HostPollable` is to create a Rust `Future` indicating whether a stream (or some other resource, e.g. http bodies) is ready, without requiring both the stream and pollable to hold some `Arc<Mutex<...>>` of the resource itself. @alexcrichton pointed out the right design here: `HostPollable` is an enum with two variants:
- `TableEntry { index: u32, make_future: for<'a> fn(&'a mut dyn Any) -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'a>> }` means: get a mutable reference to some (still dynamically-typed) other entry in the table, and apply this fn to create a `Future` from that entry, which has the same lifetime as the mutable reference. For the streams, we use this variant, and make_future is just a call to the `Host{Input,Output}Stream::ready` method.
- `Closure(Box<dyn Fn() -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'static>> + Send + Sync + 'static>)` means: use this owned closure to create a `Future` with a static lifetime. This is used for creating timers from the monotonic and wall clocks, which are ambient and therefore do not have table entries.
Note that we discovered a major design problem with the `TableEntry` variant: it exposes a crash vector if the parent resource of a pollable is destroyed before the pollable itself. This is a big enough problem that we are setting it aside to be solved in a follow-up PR.
Implementations
`wasmtime-wasi` provides a struct `AsyncReadStream` which wraps a `tokio::io::AsyncRead` to provide an impl of `HostInputStream`, and `AsyncWriteStream` which wraps `AsyncWrite` to provide `HostOutputStream`.
`AsyncReadStream` and `AsyncWriteStream` manage their own buffer in order for the `HostInputStream::read` and `HostOutputStream::write` methods to be non-blocking. Each spawns a helper tokio::task to perform async operations in the background. This requires some buffering.
Note, we discussed ensuring these background tasks are reaped properly - right now we believe that the implementations are correct and they will exit on their own when the foreground side gets dropped. However, in the future, to implement e.g. splice and forward correctly, we may need to allow these tasks to (optionally!) live longer than the WASI program's execution.
Additionally, we have provided an implementation that correctly handles waiting on `stdin` readiness on both Windows and Unix. On Unix, this is built on top of `tokio::io::unix::AsyncFd`, which allows tokio to register stdin with `epoll(7)`. Because stdin is a process-wide singleton, and also because epoll will return an error if the same fd is registered multiple times, the stdin resource is a global singleton, created lazily and guarded by a mutex. `wasmtime_wasi::preview2::stdio::stdin()` returns a `struct Stdin` which takes a lock on that singleton in the `HostInputStream::{read, ready}` methods. This means that, if for some reason you have granted the process stdin to multiple wasi streams (in the same, or different, stores), you can get all sorts of strange behavior because the same global resource backs them all behind the scenes - but this is always the case; the implementation in wasmtime_wasi just allows you to do so without hanging or panicking.
On Windows, the implementation of stdin is broken, and inheriting stdin will panic. We have a path to fixing it, but we are leaving that out of this PR, because it has been open for far too long, and far too much work is predicated on landing it.
Sync and Async uses
Despite using async for its implementation, `wasmtime-wasi` will still work for synchronous Wasmtime embeddings. A new `test-programs` suite `wasi-preview2-components-sync` is identical to `wasi-preview2-components` in all regards except that it uses wasmtime's synchronous APIs, and the appropriate `wasmtime_wasi::command::sync::add_to_linker`, rather than `wasmtime_wasi::command::add_to_linker`, which requires an async Config. The synchronous implementation creates its own Tokio runtime (if one does not already exist) and then invokes the async implementation inside of a `block_on`.
The `wasi-common` Preview 1 `poll_oneoff` implementation allowed a synchronous implementation to work without any sort of dependency on tokio, by abstracting it behind a pluggable `WasiSched`. This PR eliminates that option for our Preview 2 users. If there is a compelling need, and labor available to implement and maintain it, we could collaborate on re-introducing a fully synchronous tokio-free option in the future. For this implementation, I decided that we just need a tokio backend in order to build a path towards supporting both wasi-sockets and wasi-http with the same modular stream and pollable host traits.
Bindings traits
After living with the bindings generated underneath `crate::preview2::wasi` and `crate::preview2::wasi::command` for a while, I decided that the interface traits belong under `crate::preview2::bindings` and command belongs under `crate::preview2::command`.
Beyond that code motion, we now generate async traits for the `wasi:io/streams`, `wasi:poll/poll`, and `wasi:filesystem/filesystem` interfaces, and sync traits for everything else. If you want to use the bindings from an async embedding (i.e. with an async Config), `crate::preview2::bindings::{interface}::add_to_linker` will do it, and if you want to use them from a synchronous embedding, use `crate::preview2::bindings::sync::{interface}::add_to_linker`.
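The `block_on` step described in the Sync and Async uses section can be sketched minimally with std types (the real code hands the future to a Tokio runtime; this parking-based executor is only illustrative): park the calling thread until the future's waker fires, then poll again.

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// A waker that unparks the thread that is blocked in block_on.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

// Drive a future to completion on the current thread.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(v) => return v,
            Poll::Pending => thread::park(),
        }
    }
}
```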
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey has enabled auto merge for PR #6556.
pchickey updated PR #6556.
pchickey updated PR #6556.
pchickey merged PR #6556.
Last updated: Nov 22 2024 at 17:03 UTC