Is there a way (or a plan for a way) to indicate that a module or component is reentrant but not necessarily thread-safe? I.e. its export(s) may be called concurrently (e.g. while some number of earlier calls are blocked on host functions or on the proposed canon.wait built-in) but not necessarily in parallel?
I'm assuming a shared memory per the threads proposal would be overkill in that case, and it would be nice if a toolchain could opt into concurrency without opting into parallelism.
I think the answer is probably "no" to this question, but that hinges on a few assumptions.
All that being said, I think it's going to be the case for the foreseeable future that "don't use shared" indicates possible concurrency without parallelism. In other words, today's default output of most compilers already disallows parallelism while allowing concurrency.
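To make the "don't use shared" signal concrete, here's a minimal sketch (my own illustration, not from the thread; the export names are placeholders) of the two shapes at the core-wasm level:

```wat
;; Default output of most toolchains today: a plain, non-shared memory.
;; Nothing in this module may be touched by two threads in parallel, but
;; nothing stops a host from interleaving (non-parallel) calls into it.
(module
  (memory (export "memory") 1))

;; Opting into parallelism per the threads proposal: the memory is marked
;; shared and must declare a maximum size.
(module
  (memory (export "memory") 1 16 shared))
```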
I realized I wasn't clear about what I meant by "reentrant" and "concurrency" in my question, so let me clarify that. I don't just mean that the guest can call the host, which can call back into the guest (and I agree that we can reasonably expect that to work for most or all existing modules). What I mean is that the host could create an arbitrary number of fibers performing an arbitrary number of guest calls to a module.
As a concrete example (and leaving aside components for a moment; let's assume plain core modules): say the host creates a fiber (call it A) and uses it to call a guest export, which then calls a host import, which needs to wait for some I/O and thus suspends A. While A is suspended, the host creates another fiber (B) and uses that to call a guest export, which also calls a host import, and that suspends B. Then the I/O operation A was waiting for completes, and the host resumes A. If there's a single shadow stack managed via a global variable, we'll have a problem when A returns to the guest, because now the guest will be reading from and/or clobbering data that B put on that shadow stack. Likewise, there could be other global state, managed either by the toolchain which produced the module or by the application developer, which might be invalidated by calls from concurrent fibers.
Currently, we have to conservatively assume that an arbitrary module is not compatible with the above scenario, and I was wondering if there might be a way for a module to signal that it can handle such a scenario, but not necessarily parallel, preemptive threading.
BTW, @Luke Wagner and I chatted about this elsewhere, and his idea is to handle this at the component model level via async lifts without a callback, which would tell the host: "You can call me concurrently from separate fibers, but I'm not using the callback ABI, so just suspend any fibers that need to wait for I/O and I'll make sure the concurrency doesn't clobber anything." Presumably the toolchain that produces such a component would know how to manage multiple shadow stacks, etc.
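For example (purely my own sketch, with hypothetical names and a deliberately naive bump allocator), "managing multiple shadow stacks" could look like giving each activation its own region instead of sharing one global pointer:

```wat
(module
  (import "host" "do_io" (func $do_io))      ;; may suspend the calling fiber
  (memory 2)
  ;; Naive bump allocator handing out 4 KiB shadow-stack regions; a real
  ;; toolchain would recycle them, but that's beside the point here.
  (global $next_stack (mut i32) (i32.const 65536))
  (func $alloc_stack (result i32)
    (global.get $next_stack)
    (global.set $next_stack
      (i32.add (global.get $next_stack) (i32.const 4096))))
  (func (export "work") (result i32)
    (local $sp i32)
    ;; Each activation gets a private frame instead of sharing a global
    ;; stack pointer, so concurrent (non-parallel) activations can't
    ;; clobber each other while suspended in host calls.
    (local.set $sp (call $alloc_stack))
    (i32.store (local.get $sp) (i32.const 42))
    (call $do_io)                            ;; other fibers may run here
    (i32.load (local.get $sp))))             ;; still sees its own 42
```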
Putting it in the lifts would presumably mean it's not part of the world; however, this kind of reentrancy seems significant for host/guest compatibility.
If we're talking about concurrent calls into a component from multiple threads, agreed that that goes on the function type (via shared, inheriting its name and meaning from core wasm). But if we're just talking about (non-parallel) concurrency, then the reason I think it can just go on the lifts and not on the types is that the caller and callee can lift/lower sync/async independently, and it composes according to the scheme sketched here. Basically, the caller can always try to make multiple concurrent calls; they might just get backpressure (once the number of concurrent calls would be >1) if the callee lifts synchronously. This avoids having to partition or duplicate all of WASI into sync and async variants, "the function coloring problem", etc.