Stream: general

Topic: ✔ rustix, io_uring, rustix-uring place in the rust ecosystem


Frank Rehwinkel (Apr 05 2024 at 16:11):

Would someone shed some light on how rustix's io_uring module, and the rustix-uring crate its docs refer to for a higher-level io_uring interface, are meant to sit in the Rust ecosystem?

I'm familiar with the kernel's io_uring API and the Linux liburing library API, and with how they evolve. I'm also pretty familiar with the tokio/io-uring crate, although not so clear about the ramifications of its direct-syscall feature that uses the sc crate.
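
For readers less familiar with that crate, here is a minimal sketch of a single blocking read round trip, following the pattern in the tokio-rs/io-uring README; the file path, buffer size, and user_data value are arbitrary choices for illustration.

```rust
use io_uring::{opcode, types, IoUring};
use std::os::unix::io::AsRawFd;

fn main() -> std::io::Result<()> {
    let mut ring = IoUring::new(8)?;
    let file = std::fs::File::open("/etc/hostname")?;
    let mut buf = vec![0u8; 1024];

    // Build a read SQE; user_data lets us match the completion later.
    let read_e = opcode::Read::new(types::Fd(file.as_raw_fd()), buf.as_mut_ptr(), buf.len() as _)
        .build()
        .user_data(0x42);

    // The submission queue lives in memory shared with the kernel; push is
    // unsafe because `file` and `buf` must stay valid until completion.
    unsafe {
        ring.submission().push(&read_e).expect("submission queue is full");
    }

    ring.submit_and_wait(1)?;

    let cqe = ring.completion().next().expect("completion queue is empty");
    assert_eq!(cqe.user_data(), 0x42);
    println!("read {} bytes", cqe.result());
    Ok(())
}
```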

It looks like rustix-uring was forked from tokio/io-uring nine weeks ago, but the latest commits to rustix-uring have me confused because they are dated 2023, not 2024.

I'm unclear whether the rustix/io_uring module is meant to better support WASM or WASM components in the long run, since it is a Bytecode Alliance project.

Is it intended that Rust async crates that want to build on the sync tokio/io-uring (which doesn't actually depend on tokio) can use a feature flag to use rustix-uring instead, or is the API likely to diverge over time?

tokio/io-uring already has a feature flag for using the sc crate for its syscalls. Was it not possible to propose a rustix/linux_raw alternative?
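
As a purely hypothetical sketch of the kind of backend selection the previous questions describe, assuming rustix-uring keeps an API-compatible surface with tokio/io-uring (the feature name rustix-backend and the wrapper function are invented for illustration):

```rust
// Hypothetical: a crate that can build against either backend, assuming the
// rustix-uring fork keeps the same public API as tokio-rs/io-uring.
// The feature name "rustix-backend" is invented for this sketch.
#[cfg(feature = "rustix-backend")]
use rustix_uring as uring;
#[cfg(not(feature = "rustix-backend"))]
use io_uring as uring;

/// Construct a ring through whichever backend was selected at build time.
pub fn new_ring(entries: u32) -> std::io::Result<uring::IoUring> {
    uring::IoUring::new(entries)
}
```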

Is there a plan for one day allowing the kernel's io_uring shared mmap requirements to be directly accessible from a WASM guest? The kernel's submission and completion queues have to be in mmap regions, and some fancier features that pre-populate pools of buffers also use mmaps in the liburing example code.
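
To make the mmap requirement concrete, here is a rough, untested sketch of the setup side using rustix's raw bindings. The item names from rustix::io_uring are recalled from memory and should be treated as assumptions; the IORING_OFF_* offsets noted in the comments are the kernel ABI values.

```rust
// Rough sketch, not a tested program: set up a ring with rustix's raw binding
// and map the submission ring that the kernel shares with user space.
// Item names from rustix::io_uring are assumptions based on the kernel structs.
use rustix::io_uring::{io_uring_setup, io_uring_params};
use rustix::mm::{mmap, MapFlags, ProtFlags};

fn map_sq_ring() -> rustix::io::Result<()> {
    // io_uring_params is a plain C-layout struct; zero-initialize it.
    let mut params: io_uring_params = unsafe { std::mem::zeroed() };
    let ring_fd = io_uring_setup(256, &mut params)?;

    // The SQ ring header plus the index array live in one kernel-shared region.
    let sq_len = params.sq_off.array as usize
        + params.sq_entries as usize * std::mem::size_of::<u32>();

    unsafe {
        let _sq_ring_ptr = mmap(
            std::ptr::null_mut(),
            sq_len,
            ProtFlags::READ | ProtFlags::WRITE,
            MapFlags::SHARED,
            &ring_fd,
            0, // IORING_OFF_SQ_RING
        )?;
        // The CQ ring (offset 0x8000000) and the SQE array (offset 0x10000000)
        // need their own mmaps in the same way; buffer-ring features add more.
    }
    Ok(())
}
```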

Thanks for your patience with my questions. Everyone here has much more experience thinking about WASM and async in WASM than I do.

jordanisaacs/rustix-uring (GitHub): The `io_uring` library for Rust (with Rustix).

Tarek Sander (Apr 05 2024 at 16:17):

I don't think that would fit the WASM sandbox requirements: the kernel only knows about the process and has no concept of linear memory, so with direct access to the io_uring shared queues the WASM guest could easily read and write outside its linear memory. Pre-processing isn't possible because the kernel could in principle use the shared queues at any time. It's more likely that a WASM engine will support async IO with io_uring as a possible backend, without giving the guest direct access.

Frank Rehwinkel (Apr 05 2024 at 16:25):

I wondered if a host could give a guest a second linear memory that it had mmap'ed so the guest and the kernel could share that area. I thought the guest would have to be told the actual offset of the area from the host's perspective so what it read and wrote into that memory would have values that the kernel could relate to. That didn't seem any more dangerous than what the kernel's io_uring API already exposes to user space.

Tarek Sander (Apr 05 2024 at 16:56):

Frank Rehwinkel said:

I wondered if a host could give a guest a second linear memory that it had mmap'ed so the guest and the kernel could share that area. I thought the guest would have to be told the actual offset of the area from the host's perspective so what it read and wrote into that memory would have values that the kernel could relate to. That didn't seem any more dangerous than what the kernel's io_uring API already exposes to user space.

That's only true if you regard the process as a whole. WASM doesn't work that way: the WebAssembly instances and the host runtime are logically separated, and that separation is enforced in all parts of WASM. AFAIK the kernel doesn't allow protecting memory regions from io_uring scatter/gather, so your "tell the guest its memory offset" idea violates the memory separation between guest and host. It's essentially a "trust me bro, I won't overwrite or read outside my own memory" guarantee from the guest, which is not enough.

Frank Rehwinkel (Apr 05 2024 at 17:14):

I think I understand your point. It would be foolish for a host to give the guest such direct access to the kernel's io_uring submission queue, because the scatter/gather commands of the io_uring API would allow the guest access to all of the host's address space, regardless of the mmap base address in question. If the host were willing to implicitly trust a single guest component with that kind of power (maybe they come from the same repo), the host might as well build the io_uring-facing side of the component into its own support and not run it as WASM code at all.

I still have the other general questions about rustix/io_uring and rustix-uring. Thank you.

Dan Gohman (Apr 06 2024 at 17:17):

Indeed, there are no specific plans right now to use rustix::io_uring or rustix_uring for anything wasm-related.

Dan Gohman (Apr 06 2024 at 17:18):

And yes, the timestamps on the commits for rustix-uring are misleading because the repo has been rebased on top of upstream tokio/io-uring. It is fairly up to date.

Dan Gohman (Apr 06 2024 at 17:20):

I agree: exposing the raw host submission and completion queues to guest Wasm would have several hazards and awkwardnesses. But an interesting thing about Wasm is that host calls are much cheaper than actual host syscalls, so in theory we could get away with having functions for "push sqe" and "pop cqe" instead of exposing the raw queues.
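
As a purely illustrative shape for such an interface (all names invented), the host would own the real ring and validate every guest-supplied descriptor and buffer range before forwarding it:

```rust
// Hypothetical host-side interface shaped like the "push sqe" / "pop cqe"
// functions described above. The guest never sees the raw shared mappings;
// the host translates guest offsets into host addresses after bounds checks.
pub struct GuestSqe {
    pub opcode: u8,
    pub guest_fd: u32,
    pub buf_offset: u32, // offset into the guest's linear memory
    pub buf_len: u32,
    pub user_data: u64,
}

pub struct GuestCqe {
    pub user_data: u64,
    pub result: i32,
}

pub struct QueueFull;

pub trait UringHost {
    /// Validate the guest descriptor and buffer range, translate them to host
    /// addresses, and push a real SQE onto the host-owned ring.
    fn push_sqe(&mut self, sqe: GuestSqe) -> Result<(), QueueFull>;
    /// Return a completed entry, if any, mapped back into guest terms.
    fn pop_cqe(&mut self) -> Option<GuestCqe>;
}
```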

Dan Gohman (Apr 06 2024 at 17:27):

That said, another thing that's happening is that WASIp3's async is being designed to be completion-oriented, similar to io_uring, and in theory it should be possible to implement it on top of io_uring. See here for a recent talk about it, and here for where the prototyping is happening (though not with io_uring specifically).

dicej/isyswasfa (GitHub): I sync, you sync, we all sync for async!

Frank Rehwinkel (Apr 06 2024 at 18:23):

Thank you for these answers. I'm still trying to figure out what one can assume when one sees a Bytecode Alliance project. And my interest in either contributing further to the tokio/tokio-uring project or forking it to pursue my own ideas made me wonder whether to support just the tokio/io-uring dependency, or whether it would be possible to provide a feature that selects either it or rustix_uring as the dependency.

Maybe rustix_uring is meant to be a fork of tokio/io-uring that uses the rustix library as a dependency, with rebases from time to time. I guess it's easy enough to change the code down the road if that turns out to be possible.

I do like the idea of creating a WIT definition allowing the guest to access a host's io_uring file descriptor. I still haven't read enough about WIT, the component model, or resources to ask intelligent questions, and I didn't mean to do so here. Your posts and others have pointed to plenty of reading material I'm slowly getting through. I hope to be able to answer the question of whether such a resource could be passed from a higher-level component to a lower one.

I had come to understand that the explanations of WASIp3 being like io_uring, in that it would be completion-based rather than readiness-based, were actually orthogonal to when the kernel's io_uring interface could be used. I'm used to tokio/tokio-uring providing io_uring access to file descriptors in the context of the tokio readiness-based runtime: the io_uring file descriptor is watched through epoll by tokio, and the tokio-uring library provides the completion model for tasks that want to await an operation's completion.
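
For readers unfamiliar with that crate, this is roughly what the completion model looks like from the caller's side, following the pattern in the tokio-uring README; the file path is arbitrary:

```rust
use tokio_uring::fs::File;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // tokio-uring runs a current-thread tokio runtime and drives an io_uring
    // instance alongside it; the ring fd is polled for readiness internally.
    tokio_uring::start(async {
        let file = File::open("/etc/hostname").await?;

        // Ownership of the buffer moves into the operation while the kernel
        // may write to it, and is handed back with the result on completion.
        let buf = vec![0u8; 4096];
        let (res, buf) = file.read_at(buf, 0).await;
        let n = res?;

        println!("{}", String::from_utf8_lossy(&buf[..n]));
        Ok(())
    })
}
```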

Yes, the isyswasfa ecosystem is very interesting because it lets such a library continue to use Rust's await syntax without dealing with Pollables itself.

And, probably coming full circle in my plans, I guess I will code to the WASIp3 ideas that let components be fully async parts of a composed wasm binary. I definitely appreciate the idea that individual components don't have to have their own async runtimes.

Thanks again.

Notification Bot (Apr 06 2024 at 18:32):

Frank Rehwinkel has marked this topic as resolved.

