Apologies if I'm using the Zulip wrong; it's my first time.
Hello, my friend @Jack W and I saw your keynote (YouTube - Keynote: The Path to Components - Luke Wagner, Distinguished Engineer, Fastly). We're both really excited about it, because we'd been discussing something similar for the past few months. It would have a very similar WASM component-based, language-agnostic execution model, except we'd specifically use it to make a new kind of browser for web-like applications. We thought the problems we could solve with the web were:
The way we intended to solve them was with a highly generic WASM-based execution model that still split things into components, similar to what you've laid out. We'd combine that with a "caching" system that would communicate hashes and semvers with the origin server and with specialised caching servers closer to the user, so that only what actually needed to be downloaded was downloaded (both new components and different versions of existing ones). Finally, we'd only expose low-level APIs, particularly in the area of graphics; it'd be something similar to WebGPU, possibly simply WebGPU.
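Roughly, the negotiation we had in mind might look like the sketch below (in Rust, using the semver crate). It's only a minimal sketch; the manifest shape and all the names are made up for illustration:

```rust
// Minimal sketch of the "only download what's missing" negotiation,
// using the semver crate. The manifest shape and names are hypothetical.
use semver::{Version, VersionReq};
use std::collections::HashSet;

/// One component the application says it needs, e.g. "renderer ^1.2".
struct Dependency {
    name: String,
    requirement: VersionReq,
}

/// One component version the origin or edge cache can serve.
struct Available {
    name: String,
    version: Version,
    hash: String, // content hash of the component binary
}

/// Given the hashes the client already has cached, return only the hashes
/// it still needs to download to satisfy its dependency list.
fn missing_hashes(
    deps: &[Dependency],
    catalogue: &[Available],
    cached: &HashSet<String>,
) -> Vec<String> {
    deps.iter()
        .filter_map(|dep| {
            catalogue
                .iter()
                .filter(|a| a.name == dep.name && dep.requirement.matches(&a.version))
                .max_by(|a, b| a.version.cmp(&b.version)) // pick the newest match
                .map(|a| a.hash.clone())
        })
        .filter(|hash| !cached.contains(hash))
        .collect()
}
```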
Applications would use community-built rendering frameworks ranging anywhere from HTML-like DOMs to game engines. The actual browser wouldn't have to worry about any of that, and so could stay relatively lightweight, particularly compared to modern web browsers. Of course, this reliance on frameworks would mean large download sizes, but the intelligent caching system would handle that.
Now that we've discovered this project, we've been considering implementing a browser that uses it, adding the necessary tech (drawing on the ideas we'd had previously) to make it its own standalone webapp-like system without needing core spec changes.
Just to clarify: we were hoping to contribute to WASI and the component model to get them to a stage where a web browser and caching system could be built. How can we contribute to help achieve this?
Nice! I've long thought a re-imagining of the web browser is overdue, especially along exactly the lines you've outlined: expose only low-level APIs from the host, and let people build abstractions in untrusted third-party libraries. I think WebAssembly components are a great foundation for that, especially because the security model keeps components isolated from each other except through their explicit public interfaces.
I'm not following this closely, but if you're interested in using the Wasmtime runtime, I know we have an open tracking issue for ongoing component model work: https://github.com/bytecodealliance/wasmtime/issues/4185
By the way, I might suggest pure content-addressed storage (looking up a component by its hash) instead of making semver and dependency resolution part of your architecture. That would mean you can delegate caching to either existing HTTP implementations or systems like IPFS.
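Concretely, the component's identity could just be a hash of its bytes, so any plain HTTP cache (or IPFS node) can serve it unchanged. Here's a rough sketch, assuming the sha2 crate, with the actual fetch left as a placeholder:

```rust
// Rough sketch of pure content addressing: a component is identified and
// cached purely by the SHA-256 of its bytes (assumes the sha2 crate).
use sha2::{Digest, Sha256};
use std::fs;
use std::path::Path;

/// Hex-encoded SHA-256 digest, used as the component's name everywhere:
/// on disk, in URLs, and in an application's manifest.
fn component_id(bytes: &[u8]) -> String {
    Sha256::digest(bytes)
        .iter()
        .map(|b| format!("{:02x}", b))
        .collect()
}

/// Placeholder for whatever transport fetches the bytes on a cache miss;
/// because the name is the hash, plain HTTP caching or IPFS both work.
fn fetch_from_origin(id: &str) -> std::io::Result<Vec<u8>> {
    todo!("GET https://example.origin/components/{id}")
}

/// Look a component up by hash, downloading it at most once.
fn load_component(cache_dir: &Path, id: &str) -> std::io::Result<Vec<u8>> {
    let path = cache_dir.join(id);
    if path.exists() {
        return fs::read(path); // cache hit: nothing to download
    }
    let bytes = fetch_from_origin(id)?;
    fs::write(&path, &bytes)?; // content is immutable, so cache it forever
    Ok(bytes)
}
```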
This is indeed really interesting; one thing I've thought about for a while (and talked about with others) is whether it might eventually make sense for the JavaScript implementation in a browser to become a Wasm module (as an interpreter), or for JS to compile to Wasm. We've got the former with the SpiderMonkey.wasm port, and the latter is something we're actively thinking about. It's far from the hardest part of what you describe here, but it would be an integral part of it. Happy to see where this goes!
cc @Luke Wagner for some related discussions around the above
In addition to experimenting with the component model, an interesting project would be converting WebGPU's Web IDL-based interface into a Wit-based interface (which probably wants handles/resources to be implemented first...), binding it to wgpu-rs, and, in general, trying to reuse all the work browsers have done hardening WebGPU against untrusted content.
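To make that concrete, a Wit-level WebGPU interface might start out looking something like the fragment below. The names just mirror the Web IDL; everything here is hypothetical and assumes resources/handles have landed:

```wit
// Hypothetical sketch of a small slice of WebGPU expressed in WIT; the
// names mirror the Web IDL, but nothing here is an agreed-upon design.
interface webgpu {
  resource gpu-adapter {
    request-device: func() -> gpu-device;
  }

  resource gpu-device {
    create-buffer: func(size: u64, usage: u32) -> gpu-buffer;
  }

  resource gpu-buffer;

  request-adapter: func() -> option<gpu-adapter>;
}
```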
While I think it's valuable to let a thousand flowers bloom in this space of new ways of using wasm outside the browser, one word of caution from having seen many discussions of this sort of idea over time at Mozilla is that, despite all its flaws and warts, the DOM/CSS approach has a lot of pretty useful properties that aren't obvious on first glance. Fully capturing these properties with all the requisite nuance would take an essay and more time to write, but they span accessibility, content-blocking, consistency, battery life, size, advanced text handling, progressive enhancement, responsiveness, and layerization. A challenge question is: how would a new wasm + low-level-UI platform produce a significantly better UX than what we all experienced 15 years ago with fully-Flash UIs? I don't want to say it's not possible to avoid this outcome, but it's certainly something you want to have a well-thought-out answer for.
Luke Wagner said:
despite all its flaws and warts, the DOM/CSS approach has a lot of pretty useful properties that aren't obvious on first glance... how would a new wasm + low-level-UI platform produce a significantly better UX than what we all experienced 15 years ago with fully-Flash UIs
Point taken.
The closest thing I can point to is the xilem architecture, which, Raph mentioned, has accessibility covered already, and the render cycle seems pretty efficient (though personally I don't see a good reason for not having a setState thing).
Should we get Raph in here?
I've been thinking the same thing. It seems like components are on the right path for this.
I don't think "caching" is the whole solution to dependency bloat. It's a congestion problem, and congestion doesn't have obvious elegant solutions. Congestion is painless until it has gone too far, and once it goes too far, it's not obvious who's most responsible for it.
Some packages will be larger than others, and sometimes they simply need to be; there will always be excuses.