I am looking to build up my Wasm research library. Does anyone have a list of favorite Wasm research papers? In particular, I'm interested in isolation of modules/components from each other, or the general security posture of Wasm (and its runtimes). But curiosity has no bounds
Shameless self-promotion on my part, but some academic collaborators and I have a paper at ASPLOS '24 about verification in Cranelift, where compiler correctness is load-bearing for Wasm sandboxing: https://cfallin.org/pubs/asplos2024_veri_isle.pdf
there's also VeriWasm, a static analysis to prove memory isolation (https://cseweb.ucsd.edu/~dstefan/pubs/johnson:2021:veriwasm.pdf), though I'm currently building a proof-carrying-code mechanism in Wasmtime that subsumes/replaces it
I think I showed you this one already, Kate, but here it is again: https://cseweb.ucsd.edu/~dstefan/pubs/narayan:2023:hfi.pdf (a proposed hardware extension for lightweight isolation, with references to prior art). And, yes, there's Chris Fallin again in the author list :)
HFI sounds extremely useful, even beyond Wasm. Any idea how long these sorts of features take to end up in production chips? Or too variable to guess?
@Tal Garfinkel might have some insight about that. See also his WasmCon presentation: https://wasmcon2023.sched.com/event/1PCMa/bridging-the-architecture-divide-how-hardware-limits-wasm-and-how-we-can-do-better-tal-garfinkel-uc-san-diego
@Jeff Parsons the time scale for a new ISA extension is typically low-order years from proposal to first silicon, once a CPU vendor decides they want to do it; the variability mostly comes from the "convincing them they want to do it" part (and, as one could imagine, there's a limited budget for New Stuff in each generation, so it has to be important enough, etc.)
as Tal mentioned, conversations are happening with various folks, but at least I have no idea what the outcome will be; hope for the best, optimize the software in the meantime :-)
Thanks for that link, Joel. I'll watch the presentation when I get a chance.
And thanks for the extra context, Chris. I can certainly understand that even if computing trends make it seem obvious that something like this is important, there's still a big step from "something like this" to "exactly this", and another to deciding it's more urgent than whatever other new features are waiting for their turn.
On security, I'll suggest a very interesting one by some researchers who categorized bugs in Wasm runtimes and then built a fuzzer that mimics those bug patterns to discover new bugs in the same categories: https://arxiv.org/pdf/2301.12102.pdf
Another one is rWasm, which proposes compiling a .wasm binary to a native one with an informal argument for the host's memory safety, simply by transpiling it to safe Rust first. I'm working on a thesis partly based on this :) The paper also describes a formally proven memory-safe wasm-to-x86 compiler written in F*.
"Put Your Memory in Order: Efficient Domain-based Memory Isolation for WASM Applications" https://dl.acm.org/doi/10.1145/3576915.3623205
"Our insight is to use MPK hardware for efficient memory protection in WebAssembly [...] Our evaluation shows that PKUWA can prevent memory corruption in real projects with a 1.77% average overhead and negligible memory cost."
I find it continually interesting that people fail to understand how security boundaries work in Wasm (which you have to understand in order to see where they don't match your expectations). I generally like this paper, but without an understanding of what happens where MPK isn't available as the implementation, I'm much less excited.
for example, libmpk (park-libmpk.pdf on microsoft.com) has also explored this area outside of Wasm
All that said, it would be an amazing contribution (imho) to take that paper and work out how it could be done across architectures. This comment is the one I want to read more about :-) : "Alternatives to Intel MPK. Our prototype of PKUWA uses Intel MPK, but the DILM model is not hardware-specific. It works with any memory protection scheme similar to MPK, such as those in ARM, RISC-V, PowerPC, and Itanium CPUs [5, 13, 16, 57]."
Nothing but admiration for the work! I love these kinds of things.
Last updated: Nov 22 2024 at 16:03 UTC