Hi everyone.
I'm new here (via @Lin Clark)... I've already started a thread under #wasi on some initial PyTorch/libtorch Wasm experiments my collaborators and I are working on (apologies for the basic questions at this point).
Hope to connect with others on efforts to bring machine learning to wasm.
Sounds like @Till Schneidereit, @Mingqiu Sun, and @Andrew Brown are the ones to connect with on this topic? Congrats on PR 2208 - seems like the ball is rolling in this space.
@Austin Huang, nice to meet you; are you interested in trying out wasi-nn using Wasmtime?
@Andrew Brown yes, though I'm still learning the basics at this point. I also probably need to catch up on the design discussions to date (do you have any links I should check out so I don't rehash what's already been discussed?).
As I understand it, it's focused on a runtime for compute-graph inference, right? What are the paths for getting something compiled down to an artifact it can consume? Is there a compilation path from, say, ONNX or TorchScript?
Well, there's an example script that should give you some idea of how to perform the inference, and all of those artifacts were generated from the instructions I documented here.
I am working on a blog post that should be a better explainer of how to use wasi-nn, so there's also the option of waiting for that :smile:
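For anyone following along, here is roughly what that guest-side flow looks like with the early `wasi-nn` Rust bindings crate. This is a sketch only: the file names (`model.xml`, `model.bin`, `tensor.bgr`), the 1x3x224x224 tensor shape, and the output length are placeholders rather than details from this thread, and the crate's API has shifted across versions. It assumes the code is compiled to wasm32-wasi and run on a Wasmtime build with wasi-nn (OpenVINO backend) enabled.

```rust
// Guest-side sketch using the `wasi-nn` bindings crate (0.1-era API).
// Compile to wasm32-wasi; run under Wasmtime with wasi-nn enabled.
use std::convert::TryInto;
use std::fs;

fn main() {
    // OpenVINO IR is a pair of files: an XML graph description plus
    // binary weights. Placeholder paths.
    let xml = fs::read_to_string("model.xml").unwrap();
    let weights = fs::read("model.bin").unwrap();

    // A preprocessed input tensor serialized as raw bytes
    // (e.g. a 1x3x224x224 f32 image). Placeholder path.
    let input = fs::read("tensor.bgr").unwrap();

    let mut output = vec![0f32; 1001];
    unsafe {
        // Hand the model to the runtime's wasi-nn implementation.
        let graph = wasi_nn::load(
            &[&xml.into_bytes(), &weights],
            wasi_nn::GRAPH_ENCODING_OPENVINO,
            wasi_nn::EXECUTION_TARGET_CPU,
        )
        .unwrap();

        // Create an execution context and bind the input tensor.
        let context = wasi_nn::init_execution_context(graph).unwrap();
        let tensor = wasi_nn::Tensor {
            dimensions: &[1, 3, 224, 224],
            r#type: wasi_nn::TENSOR_TYPE_F32,
            data: &input,
        };
        wasi_nn::set_input(context, 0, tensor).unwrap();

        // Run inference and copy the results back out.
        wasi_nn::compute(context).unwrap();
        wasi_nn::get_output(
            context,
            0,
            output.as_mut_ptr() as *mut u8,
            (output.len() * std::mem::size_of::<f32>())
                .try_into()
                .unwrap(),
        )
        .unwrap();
    }
    println!("first few outputs: {:?}", &output[..5]);
}
```

Note that nothing in the guest code mentions OpenVINO beyond the encoding constant; the host-side implementation (the wasmtime-wasi-nn crate discussed below) is what actually hands the model to OpenVINO.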
@Andrew Brown interesting. Why is wasi-nn described as a library here? I was under the impression that it's more of a standard the runtime implements.
I haven't used OpenVINO, but it sounds like it can consume ONNX as a common denominator. Could there be a more direct path (from, say, PyTorch or TF)? TBH I hear more about other IRs (XLA, MLIR, ONNX, ...), though I get that the IR space is pretty fluid, so you have to start somewhere, I suppose.
Not sure where you see wasi-nn being described as a library (let me know and I will fix!). It is a specification, so I should be using "spec" or "standard," though there might be confusion because wasi-nn is implemented in Wasmtime using the wasmtime-wasi-nn crate. That is the crate that talks to OpenVINO to actually perform the inference.
"Not all libraries (e.g. wasi-nn) will know how to decode images..." https://gist.github.com/abrown/c7847bf3701f9efbb2070da1878542c1 maybe i'm misreading
Cool, let me fix that... As for whether OpenVINO supports other IRs, take a look at their Model Optimizer documentation -- there's a tool for converting from one IR to another (not sure if it covers exactly what you're looking to do?).
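To make that conversion step concrete: OpenVINO's Model Optimizer (the mo.py script that ships with the toolkit) can take, for example, an ONNX export from PyTorch and emit the .xml/.bin IR pair that the load call above consumes. The sketch below drives it from Rust purely for illustration; in practice you would just run the same command from a shell. The file names, output directory, and the assumption that mo.py and its dependencies are available are placeholders, not details from this thread.

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Illustrative only: invoke OpenVINO's Model Optimizer to convert an
    // ONNX model into OpenVINO IR (model.xml + model.bin). Paths are
    // placeholders; mo.py lives under the OpenVINO install's
    // model_optimizer directory.
    let status = Command::new("python3")
        .arg("mo.py")
        .args(["--input_model", "model.onnx"])
        .args(["--output_dir", "ir/"])
        .status()?;
    if !status.success() {
        eprintln!("model conversion failed: {}", status);
    }
    // ir/model.xml and ir/model.bin are then what wasi_nn::load consumes.
    Ok(())
}
```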
Will have a look. There are a lot of different things I'd like to do :) but my immediate aim is to wrap my head around the scope of wasi-nn (what's in and out of scope) and how to interface with it from a programming language runtime (C++ will be a focus for now since I use libtorch).
Sounds good, let me know if you run into issues; here's @Mingqiu Sun and me talking about wasi-nn: YouTube - Introducing WASI-NN.