Stream: general

Topic: GPU support with WASI


view this post on Zulip Ransford Hyman (Apr 05 2023 at 00:11):

I'm curious to know whether capabilities to execute GPU code are on the WASI radar. All of the efforts around WASM/WASI have been around CPU compute, but a great deal of ML/AI is done on GPU. So I'm wondering where the community stands on this effort :smile: I posted this in #wasi-nn but got no traction

view this post on Zulip Till Schneidereit (Apr 05 2023 at 15:27):

there's been talk about a wasi-webgpu interface as a potential answer. Nobody has spent the time to actually pursue this to my knowledge, but I think it'd be quite well received

view this post on Zulip Notification Bot (Apr 06 2023 at 03:42):

Ransford Hyman has marked this topic as resolved.

view this post on Zulip Notification Bot (Apr 06 2023 at 10:29):

Ralph has marked this topic as unresolved.

view this post on Zulip Ralph (Apr 06 2023 at 10:29):

definitely, webgpu would be a great starting point; I know we'll eventually want GPU support outside the web, though I'm not sure how those capabilities would align.

view this post on Zulip bjorn3 (Apr 06 2023 at 12:23):

The wgpu crate implements an API similar to WebGPU. It's what Firefox and Deno use internally to implement WebGPU, and it also has a WebGPU backend when compiled to WebAssembly for use in the browser. wasi-webgpu support in Wasmtime could use it too. A non-Rust wasm runtime could use either the C bindings of wgpu or Dawn (Chrome uses Dawn to implement WebGPU).

view this post on Zulip Till Schneidereit (Apr 06 2023 at 17:48):

yeah, to be clear, I meant using an API that's as closely modeled on WebGPU as possible, and can be fully implemented in terms of WebGPU in the browser, but can also be fully implemented in non-browser environments
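The layering described above — one API modeled on WebGPU, implementable either by forwarding to the browser's WebGPU or by a native backend like wgpu or Dawn — can be sketched as a trait with pluggable backends. This is purely illustrative: all the names here are hypothetical, not the actual wasi-webgpu API, and the "backends" only print what a real implementation would do.

```rust
// Hypothetical sketch of one GPU interface with two interchangeable
// backends. None of these names come from the real wasi-webgpu proposal.

trait GpuBackend {
    fn name(&self) -> &'static str;
    // A real interface would mirror WebGPU's device/queue/pipeline objects;
    // a single compute dispatch stands in for all of that here.
    fn dispatch_compute(&self, workgroups: u32) -> String;
}

/// In a browser, calls would be forwarded to the JS WebGPU API.
struct BrowserWebGpu;
impl GpuBackend for BrowserWebGpu {
    fn name(&self) -> &'static str { "webgpu (browser)" }
    fn dispatch_compute(&self, workgroups: u32) -> String {
        format!("forwarding dispatch of {workgroups} workgroups to navigator.gpu")
    }
}

/// Outside the browser, the same interface could be backed by wgpu or Dawn.
struct NativeWgpu;
impl GpuBackend for NativeWgpu {
    fn name(&self) -> &'static str { "wgpu (native)" }
    fn dispatch_compute(&self, workgroups: u32) -> String {
        format!("dispatching {workgroups} workgroups via a native driver")
    }
}

// Guest code is written once against the trait and runs on either backend.
fn run(backend: &dyn GpuBackend) {
    println!("[{}] {}", backend.name(), backend.dispatch_compute(64));
}

fn main() {
    run(&BrowserWebGpu);
    run(&NativeWgpu);
}
```

The design point is that guest code only ever sees the shared interface, so the same module can be "fully implemented in terms of WebGPU in the browser" and natively elsewhere.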

view this post on Zulip Ransford Hyman (Apr 06 2023 at 20:34):

The scenario I was thinking of was a non-browser environment. Given that a large amount of ML runs server-side, having a WASI workflow would be an attractive option :smile:. It could alleviate the Python lock-in that the ML community has today :wink:

view this post on Zulip Ransford Hyman (Apr 06 2023 at 20:35):

Also it would be interesting to see how something like this would improve startup times

view this post on Zulip bjorn3 (Apr 07 2023 at 00:26):

Have you seen wasi-nn for ML?

view this post on Zulip Ransford Hyman (Apr 16 2023 at 17:52):

bjorn3 said:

Have you seen wasi-nn for ML?

Yes I have. But it was unclear to me whether it supported GPU execution or not. It seems like there was some initial effort, but it is still incomplete.

I'm still getting to grips with the limitations of WASM and what capabilities WASI provides. Given the sandboxed nature of WASM, I wasn't sure that access to an external device like a GPU was possible without the WASI component model.

It seems like GPU execution relies heavily on the WebGPU feature, but I wasn't seeing how that would work in a non-web environment

view this post on Zulip Matthew Tamayo-Rios (Apr 17 2023 at 20:17):

It's a work in progress. The currently supported backend is OpenVINO, which only supports Intel GPUs and hardware accelerators (VPU, i.e. TPU), but there are outstanding PRs to add a TensorFlow backend. It will take us a bit, but we should get there in a reasonable amount of time.

view this post on Zulip Jorge Prendes (Nov 09 2023 at 10:28):

Has there been any progress on this?
Is there a component based wasi-webgpu proposal?
(sorry for the necrobump!)

view this post on Zulip Mendy Berger (Feb 02 2024 at 14:43):

This is what I've been working on.
https://github.com/MendyBerger/wasi-webgpu


view this post on Zulip Scott Waye (Feb 02 2024 at 15:45):

This looks fun!

view this post on Zulip Scott Waye (Feb 02 2024 at 15:50):

I did this a while back with emscripten, shameless copy of Google's example, would be fun to try to port. https://twitter.com/yowl00/status/1618728735875932171


Last updated: Nov 22 2024 at 17:03 UTC