Hello, I'm a bit confused about how true multi-threading is achieved with Wasmtime. As far as I currently understand, a potential implementation is:
Create a single Engine and use e.g. tokio to create green threads where each green thread instantiates a component.
But in that case the Engine still only really exists on a single OS thread? Should I create an Engine per OS thread?
we do have support for the wasi-threads proposal afaik -- cc @Andrew Brown for more details?
Could you clarify what you mean by "true multithreading"?
To provide some other thoughts in the meantime though:
- An `Engine` is intended to be one-per-process, shared across all threads.
- A `Store` is the one that's per-thread (sorta). A `Store` can only be executing on one thread at a time.
- Multithreading within a single instance requires the threads proposal and the wasi-threads proposal, both of which are experimental.
- Multithreading today can happen with concurrent execution of unrelated wasm instances in different threads (e.g. one `Engine` plus one `Store` per-instance-per-thread).
afaik, wasi-threads is used to have the guest spawn multiple threads. Instead, we would like to run multiple guests in parallel, if that makes sense?
We're not talking about multiple threads within the same wasm instance. Instead, we're talking about multiple instances running in parallel on the same engine, if that makes sense.
Yes, wasi-threads isn't required for multiple guests in parallel, and that's supported today with Wasmtime. And yes, you'd have one `Engine` and one store per instance
er sorry, one `Engine` in total, and then every store-per-instance would refer to the same `Engine`
Do you need the "async" functionality in order to have multiple instances on the same engine?
no, you can do it with sync calls too
async in that sense is an independent decision of how to run wasm
another useful detail: an `Engine` is internally an `Arc` wrapper around the actual data, so it's meant to be shared between threads, cloned cheaply, etc
likewise, `Module`s can be (and should be) loaded once and shared between threads
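A minimal sketch of that layout (one `Engine` for the process, a `Module` compiled once and cloned into each thread, one `Store` per instance per thread), assuming a recent wasmtime with `anyhow` for errors; the inline WAT module is a made-up placeholder:

```rust
use wasmtime::{Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    // One Engine for the whole process.
    let engine = Engine::default();
    // Compile the module once; Engine and Module are Arc-backed and cheap to clone.
    let module = Module::new(&engine, r#"(module (func (export "run")))"#)?;

    let handles: Vec<_> = (0..4)
        .map(|i| {
            let engine = engine.clone();
            let module = module.clone();
            std::thread::spawn(move || -> anyhow::Result<()> {
                // One Store (and one instance) per thread.
                let mut store = Store::new(&engine, ());
                let instance = Instance::new(&mut store, &module, &[])?;
                let run = instance.get_typed_func::<(), ()>(&mut store, "run")?;
                run.call(&mut store, ())?;
                println!("instance {i} finished");
                Ok(())
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap()?;
    }
    Ok(())
}
```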
OK, still trying to figure out what async does exactly. When you have two instances calling a regular (not async) host function, does one instance have to wait on the other call to complete?
No, they can complete in parallel. You'll probably use a single `Linker` which provides a single definition of the host function, and the host function is defined as `impl Fn(...) + Sync`, which indicates it'll be called in parallel
instance-local state is provided via the `Caller<'_, T>` function parameter to the host function, which can be used to access the `T` in the `Store<T>`, which has per-instance state
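A rough sketch of that shape, assuming a recent wasmtime; the `host`/`log` import and the `MyState` type are made up for the example:

```rust
use wasmtime::{Caller, Engine, Linker, Module, Store};

// Hypothetical per-instance state stored in each Store<MyState>.
struct MyState {
    name: String,
}

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();
    let module = Module::new(
        &engine,
        r#"(module
            (import "host" "log" (func $log))
            (func (export "run") (call $log)))"#,
    )?;

    // One Linker shared by every instantiation; the closure must be
    // Send + Sync so it can be called from any thread, in parallel.
    let mut linker: Linker<MyState> = Linker::new(&engine);
    linker.func_wrap("host", "log", |caller: Caller<'_, MyState>| {
        // Per-instance state comes from the Store<MyState> via the Caller.
        println!("hello from {}", caller.data().name);
    })?;

    let mut store = Store::new(&engine, MyState { name: "instance-0".to_string() });
    let instance = linker.instantiate(&mut store, &module)?;
    let run = instance.get_typed_func::<(), ()>(&mut store, "run")?;
    run.call(&mut store, ())?;
    Ok(())
}
```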
async is required if:

- your host functions are `async`
- you want to time-slice execution between different instances on the same thread

otherwise you probably don't need async
time-slicing between different instances?
yeah for example if you want to prevent an infinite loop from hogging time from other instances
So if I have a PC with 4 cores, and 5 instances with an infinite loop, only 4 will actually be executing at any given time?
it depends, if those 5 instances are in 5 OS threads then no, the OS will time-slice between threads
if those 5 instances are on 4 OS threads then yes, those 4 OS threads will be locked up by the infinite loops
you can set up something like epochs/fuel to time out the wasm instances, but the OS threads will be locked up while the infinite loop is executing otherwise
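For example, a rough sketch of using epochs to time out a runaway guest, assuming a recent wasmtime with `anyhow`; the inline WAT loop is a placeholder guest:

```rust
use std::time::Duration;
use wasmtime::{Config, Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    config.epoch_interruption(true);
    let engine = Engine::new(&config)?;

    // A guest that loops forever.
    let module = Module::new(&engine, r#"(module (func (export "run") (loop (br 0))))"#)?;

    let mut store = Store::new(&engine, ());
    // Trap the guest once the engine's epoch advances past this deadline.
    store.set_epoch_deadline(1);
    store.epoch_deadline_trap();

    // A background thread ticks the epoch, which interrupts the infinite loop.
    let engine2 = engine.clone();
    std::thread::spawn(move || loop {
        std::thread::sleep(Duration::from_millis(50));
        engine2.increment_epoch();
    });

    let instance = Instance::new(&mut store, &module, &[])?;
    let run = instance.get_typed_func::<(), ()>(&mut store, "run")?;
    // This call returns an error (a trap) instead of hanging forever.
    assert!(run.call(&mut store, ()).is_err());
    Ok(())
}
```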
Does the instance run in the OS thread where it is started? Or is there a separate thread where the engine + instances run? I was confused by the explanation about the engine being "shared" between threads.
in the OS thread it was started on
Wasmtime doesn't spawn any threads for wasm execution, that's up to the embedder to configure
sharing the engine is done by the embedder as well by having it as part of the closed-over-state for new threads
I think I'm starting to get it now. What is closed-over-state?
Something like:
```rust
let engine = ...;
std::thread::spawn(move || {
    foo(&engine); // the outer closure has `engine` in its closed-over-state
});
```
So if e.g. I use tokio to spawn instances, they could, if I'm not careful, run on the same thread as my actual executor, blocking new instances from spawning?
correct, if using tokio it's recommended to enable async support (since your host functions are probably async) and to additionally enable epochs plus `epoch_deadline_yield_and_update` to enable time-slicing and avoid blocking the executor
that forces wasm to "yield" at every epoch interval, which provides the ability to cancel wasm (e.g. time it out) or otherwise just let other work run if there is any
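A rough sketch of that tokio setup, assuming a recent wasmtime (where the method is spelled `epoch_deadline_async_yield_and_update`) and tokio's full feature set; the busy-loop guest is a placeholder:

```rust
use std::time::Duration;
use wasmtime::{Config, Engine, Instance, Module, Store};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    config.async_support(true);
    config.epoch_interruption(true);
    let engine = Engine::new(&config)?;

    // A background thread ticks the epoch so long-running guests get descheduled.
    let engine2 = engine.clone();
    std::thread::spawn(move || loop {
        std::thread::sleep(Duration::from_millis(10));
        engine2.increment_epoch();
    });

    // A CPU-bound placeholder guest that counts down from 100 million.
    let module = Module::new(
        &engine,
        r#"(module
            (func (export "run")
              (local $i i32)
              (local.set $i (i32.const 100000000))
              (loop $l
                (local.set $i (i32.sub (local.get $i) (i32.const 1)))
                (br_if $l (i32.ne (local.get $i) (i32.const 0))))))"#,
    )?;

    let mut store = Store::new(&engine, ());
    store.set_epoch_deadline(1);
    // When the deadline is hit, yield back to the tokio executor and push the
    // deadline out by 1 more tick instead of trapping.
    store.epoch_deadline_async_yield_and_update(1);

    let instance = Instance::new_async(&mut store, &module, &[]).await?;
    let run = instance.get_typed_func::<(), ()>(&mut store, "run")?;
    run.call_async(&mut store, ()).await?;
    Ok(())
}
```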
(just as a point of reference, this isn't really a problem unique to Wasmtime: any system that calls guest code from an executor has to worry about that code blocking the executor loop)
Ok, we're seeing this behavior with sleep too. Is that because, when a guest calls "sleep", it sleeps the entire OS thread? Or is it simply that the OS thread is locked by the executor doing something like "poll()" in that thread?
yes, it'd be the same as native Rust code called by an async executor calling the system sleep(). Runtimes like tokio provide alternatives for things that would usually block, that integrate with the executor to yield instead
(including sleeps, async versions of IO, async versions of mutexes and queues, that sort of thing)
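For example, a sketch of an async host function that sleeps via tokio instead of blocking the OS thread, assuming a recent wasmtime and tokio; the `host`/`sleep_ms` names are made up for the example:

```rust
use std::time::Duration;
use wasmtime::{Caller, Config, Engine, Linker, Module, Store};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    config.async_support(true);
    let engine = Engine::new(&config)?;
    let module = Module::new(
        &engine,
        r#"(module
            (import "host" "sleep_ms" (func $sleep_ms (param i64)))
            (func (export "run") (call $sleep_ms (i64.const 50))))"#,
    )?;

    let mut linker: Linker<()> = Linker::new(&engine);
    // An async host function: the tokio sleep yields to the executor rather
    // than parking the whole OS thread like std::thread::sleep would.
    linker.func_wrap_async("host", "sleep_ms", |_caller: Caller<'_, ()>, (ms,): (u64,)| {
        Box::new(async move {
            tokio::time::sleep(Duration::from_millis(ms)).await;
        })
    })?;

    let mut store = Store::new(&engine, ());
    let instance = linker.instantiate_async(&mut store, &module).await?;
    let run = instance.get_typed_func::<(), ()>(&mut store, "run")?;
    run.call_async(&mut store, ()).await?;
    Ok(())
}
```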
Does this mean that the `fuel` concept doesn't help trying to unblock threads with sleeping guests?
if you call into the kernel and ask for the entire thread to sleep, there's nothing we can do to stop that
Ok, I didn't realize `sleep` in the guest goes straight to the host kernel.
iiuc, the wasi impl of sleep will yield when using the async version of wasi
ah, sorry, I was unclear: I had thought you meant you had a hostcall that directly called sleep
ok, thanks for the clarification!
Hopefully, my final question: we'd like to launch X instances on X new OS threads, so the total number of threads is X+1 (the launching thread plus the X it launches). Is this possible with Tokio?
One option is to use `std::thread::spawn`, which guarantees a thread is spawned. Another is to use `spawn_blocking`, but that won't guarantee a thread is spawned (e.g. the thread pool may already be full)
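A small sketch of the difference, where `run_instance` is a made-up stand-in for "instantiate and run one guest", assuming tokio's full feature set:

```rust
// Hypothetical helper standing in for instantiating and running one guest.
fn run_instance(id: usize) {
    println!("running instance {id}");
}

#[tokio::main]
async fn main() {
    // Guaranteed to create a brand-new OS thread per instance.
    let threads: Vec<_> = (0..4)
        .map(|i| std::thread::spawn(move || run_instance(i)))
        .collect();

    // Runs on tokio's blocking pool; a new thread is only created if the pool
    // has capacity, otherwise the closure waits for a free pool thread.
    let pooled = tokio::task::spawn_blocking(|| run_instance(99));

    pooled.await.unwrap();
    for t in threads {
        t.join().unwrap();
    }
}
```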
Great, thanks a lot!
Merlijn Sebrechts said:
Ok, we're seeing this behavior with sleep too. Is that because, when a guest calls "sleep", it sleeps the entire OS thread? Or is it simply that the OS thread is locked by the executor doing something like "poll()" in that thread?
that is true for the synchronous implementation of wasi preview 1 (wasi-cap-std-sync). for the async impl of wasi preview 1, and all impls of preview 2, it's actually just a tokio yield for the duration of the sleep
Ah, makes sense, thanks!