Stream: git-wasmtime

Topic: wasmtime / issue #9491 wasm-SPIRV


Wasmtime GitHub notifications bot (Oct 21 2024 at 10:25):

SkillfulElectro opened issue #9491:

Feature

SPIR-V compilation target

Benefit

Adding a SPIR-V target for Wasm would make it the best way to write code once and use it on both CPU and GPU, so it would be a powerful option to have.

Implementation

I think we should convert our code to naga IR, and then use wgpu to run the resulting SPIR-V. I also think an abstraction over memory allocation, copies, and other CPU<->GPU transfers could improve development time.

Alternatives

Directly compiling to SPIR-V

Wasmtime GitHub notifications bot (Oct 21 2024 at 11:01):

bjorn3 commented on issue #9491:

Wasm and SPIR-V have a fundamentally different memory model from each other. Wasm models memory as a single array of bytes, while SPIR-V models it as a bunch of typed objects. Some of these may be arrays into which you can index, but it fundamentally doesn't support arbitrary pointers like wasm does. https://github.com/EmbarkStudios/spirt can lift some uses of untyped memory (rust, wasm, ...) into typed memory (spir-v), but can't lift all of them. Also to make any effective use of a GPU you also need to support work-group local memory and more, which wasm doesn't support.
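
As a rough Rust-flavored illustration of the mismatch bjorn3 describes (my own sketch, not code from either project): Wasm gives a program one flat, untyped byte array, while SPIR-V gives it typed objects such as storage buffers.

```rust
// Wasm-style memory: a single untyped byte array. Any i32 can act as a
// "pointer", and the same bytes may be reinterpreted as any type.
fn wasm_style_load_f32(linear_memory: &[u8], addr: usize) -> f32 {
    let bytes: [u8; 4] = linear_memory[addr..addr + 4].try_into().unwrap();
    f32::from_le_bytes(bytes)
}

// SPIR-V-style memory: a typed object (roughly what a WGSL
// `var<storage> data: array<f32>` is). You can index into it, but there is
// no flat address space to alias or reinterpret freely. Lifting the first
// form into the second is what tools like spirt attempt, and it cannot be
// done for every Wasm program.
fn spirv_style_load_f32(storage_buffer: &[f32], index: usize) -> f32 {
    storage_buffer[index]
}
```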

Wasmtime GitHub notifications bot (Oct 21 2024 at 12:56):

SkillfulElectro commented on issue #9491:

Wasm and SPIR-V have a fundamentally different memory model from each other. Wasm models memory as a single array of bytes, while SPIR-V models it as a bunch of typed objects. Some of these may be arrays into which you can index, but it fundamentally doesn't support arbitrary pointers like wasm does. https://github.com/EmbarkStudios/spirt can lift some uses of untyped memory (rust, wasm, ...) into typed memory (spir-v), but can't lift all of them. Also to make any effective use of a GPU you also need to support work-group local memory and more, which wasm doesn't support.

Everything you say is right, but it is still possible.
For the last point, for example, we can add an option for the user to set those values, defaulting to (1, 1, 1) if they are not set. And for the first point: the goal is for the job to get done, not how it gets done.

Wasmtime GitHub notifications bot (Oct 21 2024 at 16:10):

cfallin commented on issue #9491:

@SkillfulElectro thanks for the issue!

I was involved in some discussions around this in 2020 or so -- and the conclusion then was essentially the same as @bjorn3's points now, that the target is quite different and this would not be an easy adaptation. The use-case in question ended up finding a different way to program GPUs portably.

That discussion was purely about a Cranelift port, but the Wasm runtime as well is an even bigger question mark: what would it mean for Wasmtime to run on a GPU where there is no operating system, (sometimes) no virtual memory, etc.? Or does the Wasm VM get split between GPU and CPU, with (expensive) calls between them?

And then how does one actually take advantage of the parallelism? Do we need a new "vectorized Wasm call" API in Wasmtime? (Keep in mind that a single thread of a GPU has lower performance than a single thread on a CPU; GPUs only make sense when leveraging the SIMT model. And SIMT != SIMD, i.e., the programming model is not the same as what Wasm has exposed for data parallelism.)
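
To make the SIMT vs. SIMD distinction concrete, here is a small Rust sketch (my own illustration, not tied to any Wasmtime API): SIMD is one thread operating on wide values, which is what Wasm's v128 instructions expose; SIMT is many scalar threads all running the same function with different indices, which is how a GPU wants to be programmed.

```rust
// SIMD: a single thread processes several lanes per operation.
fn simd_style_add(a: [f32; 4], b: [f32; 4]) -> [f32; 4] {
    // One call handles four lanes at once (roughly what Wasm v128 ops map to).
    [a[0] + b[0], a[1] + b[1], a[2] + b[2], a[3] + b[3]]
}

// SIMT: the same scalar function runs once per element, conceptually in
// parallel, with the hardware grouping the threads into warps/wavefronts.
fn simt_style_add(idx: usize, a: &[f32], b: &[f32], out: &mut [f32]) {
    out[idx] = a[idx] + b[idx]; // a GPU would launch one "thread" per idx
}
```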

What do we do about branch divergence? Do we have estimates or modeling that show this would be reasonably low overhead for typical Wasms?

For all these reasons I'm pretty skeptical. That doesn't mean we should shut down discussion now, at all. What it does mean is that probably there should be a more detailed writeup: what is the use-case, how would all of these high-level design questions be resolved, etc. This should probably take the form of an RFC discussion eventually, but before that, it would help if you could write a bit more about motivation and these other questions here.

Wasmtime GitHub notifications bot (Oct 22 2024 at 04:55):

SkillfulElectro commented on issue #9491:

@SkillfulElectro thanks for the issue!

I was involved in some discussions around this in 2020 or so -- and the conclusion then was essentially the same as @bjorn3's points now, that the target is quite different and this would not be an easy adaptation. The use-case in question ended up finding a different way to program GPUs portably.

That discussion was purely about a Cranelift port, but the Wasm runtime as well is an even bigger question mark: what would it mean for Wasmtime to run on a GPU where there is no operating system, (sometimes) no virtual memory, etc.? Or does the Wasm VM get split between GPU and CPU, with (expensive) calls between them?

And then how does one actually take advantage of the parallelism? Do we need a new "vectorized Wasm call" API in Wasmtime? (Keep in mind that a single thread of a GPU has lower performance than a single thread on a CPU; GPUs only make sense when leveraging the SIMT model. And SIMT != SIMD, i.e., the programming model is not the same as what Wasm has exposed for data parallelism.)

What do we do about branch divergence? Do we have estimates or modeling that show this would be reasonably low overhead for typical Wasms?

For all these reasons I'm pretty skeptical. That doesn't mean we should shut down discussion now, at all. What it does mean is that probably there should be a more detailed writeup: what is the use-case, how would all of these high-level design questions be resolved, etc. This should probably take the form of an RFC discussion eventually, but before that, it would help if you could write a bit more about motivation and these other questions here.

Well, all of his points are correct, but consider: nearly all languages compile to Wasm, so if we can run Wasm on the GPU, it means we can run all of our ordinary code there without modification. Also, what about compiling to WGSL, if managing things the SPIR-V way is hard? We would just need to add a way for the user to specify the number of blocks and the workgroup size. Compilation only needs to happen once, so it is a cheap price to pay for making simple code run on the GPU, and we can reuse the compiled module again without recompiling. For this we can use wgpu and naga.
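
A minimal sketch of what that user-facing configuration might look like, in Rust; the names here (`GpuLaunchConfig`, `CompiledGpuModule`) are hypothetical and do not correspond to any existing Wasmtime or wgpu API:

```rust
/// Hypothetical launch configuration the user would supply per dispatch.
pub struct GpuLaunchConfig {
    /// Number of workgroups ("blocks") to launch in each dimension.
    pub workgroups: [u32; 3],
    /// Threads per workgroup, baked into the generated WGSL at compile time.
    pub workgroup_size: [u32; 3],
}

impl Default for GpuLaunchConfig {
    fn default() -> Self {
        // Matching the earlier suggestion: default to (1, 1, 1) if unset.
        Self { workgroups: [1, 1, 1], workgroup_size: [1, 1, 1] }
    }
}

/// Hypothetical handle to a Wasm module already translated to WGSL and
/// compiled into a GPU pipeline, so it can be dispatched repeatedly
/// without recompiling.
pub struct CompiledGpuModule {
    pub generated_wgsl: String, // output of the (hypothetical) Wasm -> WGSL step
    // A real implementation would also hold the wgpu pipeline, bind group
    // layouts, and cached buffers here.
}
```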

Wasmtime GitHub notifications bot (Oct 22 2024 at 05:08):

cfallin commented on issue #9491:

if we can run Wasm on the GPU, it means we can run all of our ordinary code there without modification

Yes, I don't think anyone doubts that having this target would be very useful. The difficult design questions are really the heart of the problem though -- the question is how to map Wasm to the GPU programming abstraction in a way that makes sense and yields speedup. I'd invite you to give your thoughts on any of the questions I wrote out above!

(I'll actually say a little more directly: the way open-source works is that interested parties come in with time and energy and drive interesting new directions or additions to projects. Leaving a comment asking for a very general high-level goal, and then arguing why you want it without driving the engineering, isn't likely to lead anywhere. What I'm trying to steer you toward is driving the design exploration here yourself, in a way that could break the problem down into actionable pieces.)

Wasmtime GitHub notifications bot (Oct 22 2024 at 13:30):

SkillfulElectro commented on issue #9491:

@cfallin OK, so I think, first of all: why would we need to use a GPU? Parallel computing. So, first, some kinds of Wasm modules cannot be compiled to GPU kernel functions, namely those that use WASI or other features not related to pure computation. Second, we create a struct which stores the number of blocks in each dimension and the number of workgroups (threads) in each block. Third, the Wasm function must take an index of type int as its first parameter and an array of data types supported by WGSL as its second parameter. With these simple rules, most code that compiles to Wasm can be run on the GPU.

Now we compile the Wasm bytecode to WGSL and pass it to wgpu (I say wgpu because I am familiar with it): the index becomes the global invocation id, and the data arrays or textures become the inputs, which we pass to the GPU using wgpu buffers. Also, Wasm functions which are compiled to run on the GPU must not return anything; their return values must be written into the input arrays.
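
To illustrate the convention being proposed (index as the first parameter, a flat data array as the second, results written back instead of returned), here is a rough sketch of what the WGSL generated for a trivial kernel might look like, embedded as a Rust string the way wgpu shaders usually are. This is my own illustration of the idea, not output from any existing Wasm-to-WGSL translator:

```rust
// Hypothetical result of translating a trivial Wasm function into WGSL.
// The workgroup size would come from the user-supplied configuration, and
// the global invocation id plays the role of the index parameter.
const GENERATED_WGSL: &str = r#"
@group(0) @binding(0) var<storage, read_write> data: array<f32>;

@compute @workgroup_size(64, 1, 1)
fn kernel(@builtin(global_invocation_id) gid: vec3<u32>) {
    let i = gid.x;               // the "index" first parameter
    if (i < arrayLength(&data)) {
        data[i] = data[i] * 2.0; // results are written back, nothing is returned
    }
}
"#;
```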

We may have multiple GPU devices, for example on a server or something similar, so we need to add a way to iterate over them by index to choose the preferred device, something like https://github.com/SkillfulElectro/EMCompute/blob/main/src/gpu_device.rs .
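
For device selection, wgpu already exposes adapter enumeration on native targets. A minimal sketch, assuming a recent wgpu version (exact constructor signatures differ a bit between releases):

```rust
// List the available GPU adapters so a caller can pick one by index.
fn list_adapters() {
    let instance = wgpu::Instance::default();
    for (index, adapter) in instance
        .enumerate_adapters(wgpu::Backends::all())
        .into_iter()
        .enumerate()
    {
        let info = adapter.get_info();
        println!("{index}: {} ({:?}, {:?})", info.name, info.backend, info.device_type);
    }
}
```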

Wasmtime GitHub notifications bot (Oct 22 2024 at 16:29):

cfallin commented on issue #9491:

@SkillfulElectro thanks for your reply. I think there needs to be a deeper exploration of the engineering tradeoffs here. I'll go through your points and my questions above to try to help guide you a bit.

first of all: why would we need to use a GPU? Parallel computing

Sure, again, no one is doubting how useful this would be if it were built!

Second, we create a struct which stores the number of blocks in each dimension and the number of workgroups (threads) in each block. Third, the Wasm function must take an index of type int as its first parameter and an array of data types supported by WGSL as its second parameter. With these simple rules, most code that compiles to Wasm can be run on the GPU.

This is a very high-level and vague description of a more detailed system design that I think you have in your head. A few followup questions that could help expand it:

What kind of computation is this intended for? Is there one Wasm instance overall, or are there many invocations of a single Wasm instance, and we are taking blocks of them to run as GPU warps? (I suspect the latter, but let's say it explicitly.)

You say "the Wasm function must get index ... and an array of supported data types ...": here you seem to be confusing the Wasm abstraction layer with a higher-level ABI of some sort. Wasm in general supports functions of any allowable signature. Are you describing a way to use this parallelized Wasm instance invocation for a certain problem type?

"also wasm functions which are compiled to run on GPU must not return anything" -- as above: a Wasm engine has to be able to support any Wasm module; we can't ship something that only works for a small subset of Wasm, as that wouldn't be Wasm anymore.

"now we compile the wasm bytecode to wgsl and pass it to wgpu" -- this single statement is encapsulating the hardest part, with many many open questions. All of the questions above apply, and we need to argue that we can support all the needed abstractions (Wasm heaps, tables, hostcalls, etc). Not to mention the questions around compiling to the target: register allocation, do we make use of scratchpad memory or not, etc.

At a higher level, I'll repeat the questions I wrote above; we need crisp answers to all of these I think:

Does the Wasmtime runtime itself run on the GPU or the CPU? If the CPU, how do we handle hostcalls? If the GPU, are we convinced that every abstraction Wasmtime needs is available on the GPU, with no operating system underneath it?

Given the answers to the above, do we have some early evidence, even napkin math of some sort, that this will yield feasible performance?

Do we have a "vectorized Wasm call" API in Wasmtime? Or some other way to build the parallel invocation (lazy batching or something)?

Wasmtime GitHub notifications bot (Oct 22 2024 at 18:02):

SkillfulElectro commented on issue #9491:

@cfallin Well, you are right, Wasmtime is just a runtime for Wasm.

What kind of computation is this intended for? Is there one Wasm instance overall, or are there many invocations of a single Wasm instance, and we are taking blocks of them to run as GPU warps? (I suspect the latter, but let's say it explicitly.)

You say "the Wasm function must get index ... and an array of supported data types ...": here you seem to be confusing the Wasm abstraction layer with a higher-level ABI of some sort. Wasm in general supports functions of any allowable signature. Are you describing a way to use this parallelized Wasm instance invocation for a certain problem type?

" also wasm functions which are compiled to run on GPU must not return anything" -- as above: a Wasm engine has to be able to support any Wasm module; we can't ship something that only works for a small subset of Wasm, as that wouldn't be Wasm anymore.

"now we compile the wasm bytecode to wgsl and pass it to wgpu" -- this single statement is encapsulating the hardest part, with many many open questions. All of the questions above apply, and we need to argue that we can support all the needed abstractions (Wasm heaps, tables, hostcalls, etc). Not to mention the questions around compiling to the target: register allocation, do we make use of scratchpad memory or not, etc.

Does the Wasmtime runtime itself run on the GPU or the CPU? If the CPU, how do we handle hostcalls? If the GPU, are we convinced that every abstraction Wasmtime needs is available on the GPU, with no operating system underneath it?

Given the answers to the above, do we have some early evidence, even napkin math of some sort, that this will yield feasible performance?

Do we have a "vectorized Wasm call" API in Wasmtime? Or some other way to build the parallel invocation (lazy batching or something)?

Wasmtime GitHub notifications bot (Nov 08 2024 at 17:43):

fitzgen closed issue #9491:

Feature

SPIR-V compilation target

Benefit

Adding a SPIR-V target for Wasm would make it the best way to write code once and use it on both CPU and GPU, so it would be a powerful option to have.

Implementation

I think we should convert our code to naga IR, and then use wgpu to run the resulting SPIR-V. I also think an abstraction over memory allocation, copies, and other CPU<->GPU transfers could improve development time.

Alternatives

Directly compiling to SPIR-V

Wasmtime GitHub notifications bot (Nov 08 2024 at 17:43):

fitzgen commented on issue #9491:

Running Wasm on the GPU sounds neat! However, it is a different enough environment that you'd really want to tailor the runtime to that specific use case rather than try to bolt GPU support onto an existing runtime that wasn't designed for GPU interaction.

@SkillfulElectro, I encourage you to create a new Wasm runtime that is specifically designed for executing Wasm on the GPU. I think that would be the best way to leverage the GPU's massive parallelism and not end up with a frankenstein monster that both (a) doesn't realize the ideals you're aiming for and (b) becomes a huge maintenance/portability burden. Such a new project would be super cool, and I would be rooting for its success.

However, I don't think it makes sense to bolt Wasm-on-the-GPU support onto Wasmtime, which is definitely not architected for such use cases, and I highly doubt it would hit the goals you've laid out.

Therefore, I'm going to go ahead and close this issue, but folks should feel free to continue discussion and brainstorming here if they find it useful (at least until the dedicated Wasm-on-the-GPU runtime project is kicked off, at which point there will be a better forum for these discussions).


Last updated: Nov 22 2024 at 17:03 UTC