JITs like LuaJIT and SpiderMonkey need to be able to generate code at runtime, which runs into the limits of wasm. (Chris Fallin has an excellent blog series on how StarlingMonkey gets high performance using AOT or pre-generated stubs, but a full JIT that can link new wasm code would help get maximum performance for some jobs.)
How might dynamic linking of new code at runtime work as a WASI API?
Supposing this dynamic linking added a new function to a table, my guess would be that the new functions would be expressed as component model types — but if performance is a concern, hosts may not want the overhead of a component linker and validator when they can get away with core wasm. Could there be a way to import newly generated core modules at runtime without having to wrap them in a component first?
Generally, would a "wasi-codegen" world be in scope, and what might it look like?
There's at least some discussion of this in the original Wasm design docs (see here and here for example); I suspect an API that allows one to add individual functions, given the bytecode, is probably about the right abstraction and overhead level
One can more-or-less do this by trampolining through JS on a web Wasm host (browser) today, and AFAIK some folks are already doing this (it's technically a new module with one function, and one has to use exports/imports to allow the new code to access the memory and functions of the existing module)
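To make the pattern concrete, here is a minimal, self-contained sketch of that trampoline in JavaScript (runnable in Node or a browser). The hand-assembled bytes stand in for runtime-generated code: a core module that imports a memory and exports one function reading from it. The import/export names ("e", "m", "f") are made up for this example, not part of any convention.

```javascript
// Sketch of the "new module per function" trampoline: the JS host compiles
// freshly generated core-wasm bytes and links them against the existing
// instance's memory via imports. This module imports memory ("e" "m") and
// exports f() -> i32, which returns mem[0] + 1.
const newCode = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // \0asm magic, version 1
  0x01, 0x05, 0x01, 0x60, 0x00, 0x01, 0x7f,       // type section: () -> i32
  0x02, 0x08, 0x01, 0x01, 0x65, 0x01, 0x6d,       // import section: "e" "m"
  0x02, 0x00, 0x01,                               //   (a memory, min 1 page)
  0x03, 0x02, 0x01, 0x00,                         // func 0 has type 0
  0x07, 0x05, 0x01, 0x01, 0x66, 0x00, 0x00,       // export "f" = func 0
  0x0a, 0x0c, 0x01, 0x0a, 0x00,                   // code section, 0 locals
  0x41, 0x00, 0x28, 0x02, 0x00,                   // i32.const 0; i32.load
  0x41, 0x01, 0x6a, 0x0b,                         // i32.const 1; i32.add; end
]);

// Stand-in for the existing module's memory, shared with the new code
// through the import object.
const memory = new WebAssembly.Memory({ initial: 1 });
new Int32Array(memory.buffer)[0] = 41;

WebAssembly.instantiate(newCode, { e: { m: memory } }).then(({ instance }) => {
  console.log(instance.exports.f()); // 42 (mem[0] + 1)
});
```

Functions of the existing module can be passed the same way (or via a shared funcref table import), which is what makes the "one new module per generated function" approach workable today, at the cost of a JS round-trip for linking.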
whether a new proposal is an API or a core Wasm instruction is a subjective design question, I'm sure folks would have opinions but it doesn't seem to matter too much technically (IMHO)
Also, Emscripten has been supporting module-level dynamic loading via dlopen/dlsym based on https://github.com/WebAssembly/tool-conventions/blob/main/DynamicLinking.md for a while now.
the big social/political aspect here would be convincing folks that this is needed with some concrete example/prototype -- performance data speaks loudest to engine implementers who have lots of options of extensions to work on :-)
Performance is one motivation; another is compatibility with ecosystems such as Python's, where dlopen-ing native extensions at runtime is commonplace.
(although in that case the code isn't usually being generated at runtime)
Thanks for those resources on existing work.
My (uninformed) impression of Python for wasm is that it's difficult with a dynamic language to know what native extensions you'd need in advance. I was mentally grouping it with the runtime-generated situation, because I don't know whether you'd be able to enumerate the native extensions beforehand. Perhaps that's a question for Pyodide and CPython though.
componentize-py currently uses a conservative approach when constructing a component that uses native extensions: it searches the Python search path at build time for WASI-compatible extensions and bundles them into the component, synthesizing lookup tables which may be used at runtime to resolve the exports via dlopen/dlsym. See here for details.
You're right, though, that the conservative approach potentially bundles extensions that won't necessarily be used at runtime. And it wouldn't support code that e.g. downloads and loads extensions at runtime rather than including them as part of the package.
Chris Fallin said:
One can more-or-less do this by trampolining through JS on a web Wasm host (browser) today, and AFAIK some folks are already doing this (it's technically a new module with one function, and one has to use exports/imports to allow the new code to access the memory and functions of the existing module)
the big social/political aspect here would be convincing folks that this is needed with some concrete example/prototype -- performance data speaks loudest to engine implementers who have lots of options of extensions to work on :-)
We do it via JS in dotnet in the browser. You can read some details here https://github.com/dotnet/runtime/blob/main/docs/design/mono/jiterpreter.md
This is not a normal dotnet JIT; rather, it lets the browser's JIT optimize sequences of instructions of the Mono interpreter.
It gives us a significant perf boost in hot-path scenarios. Quantifying it as a single number isn't a good idea, because it heavily depends on the use case, but overall it's definitely worth it.
Compared to Mono AOT -> .wasm, the final app has a much smaller download size, which matters a lot for a web page.
It would be technically possible to implement a WASM backend for RyuJIT and use this JS escape hatch too, but that's a multi-year project.
It would be even better if WASI runtimes allowed for something similar.
What kind of metric would convince wasmtime to implement support for it?
Ah, I think for wasmtime specifically, we're already pretty convinced it can be a good idea in the right cases; the limiting factor is engineering time (we are an extremely small team and very overburdened already). So if others (especially at large companies with lots of engineering resources!) want it, we'd be happy to help design it and review PRs
There's also the standards-compliance angle; we'd want to build it as hostcalls and not call it "wasi-" anything unless/until it went before the relevant standards groups
Last updated: Dec 23 2024 at 12:05 UTC