:wave:
A major additional tooling building block that is in the works is first-class debugging support in Wasmtime
If I wanted to contribute to one of these, how would I? (re: WIT expressiveness)
can those WIT-expressiveness changes happen in parallel?
I'll be talking a bit as well about parallelization of implementation and a general path for how features are implemented too
nothing major different from what Luke/Bailey already said though
Are non-exhaustive enums planned as well?
My impression for enums is that they'd be more-or-less under the "function subtyping" category, which has a number of open questions, but they would probably end up under that moniker (e.g. as a form of WIT evolution)
in the Zoom chat, @Victor Adossi jokingly said "The bikeshedding starts now right?"
I think there's an important kernel of truth here: the idea for a Component Model 1.0 is to achieve stability for decades, not years. So we certainly want to get things as right as possible, and very much appreciate input!
Great point -- so far it's a small miracle that no showstopping design issues have occurred, thanks to collaboration and good careful painting.
The JS event loop basically does cooperative threads, right?
Not entirely, no: JS doesn't have the ability to suspend entire stacks (instead of single frames for async functions), so if you need to suspend an entire call stack, you have to build that in-content yourself. JSPI does integrate stack suspension for Wasm into the JS event loop
For Sy: Are there places that folks can start on work? Like for the guest languages? Is it about ready for that?
I'm expecting wasi-libc pthread support will be the foundation for most guest toolchains
(i.e. those toolchains will need to wait for at least an experimental, thread-enabled wasi-sdk release)
^ extending on that, is that roughly how rust will work? Shoutout to your work here: https://github.com/tokio-rs/mio/pull/1931/
Yes: wasi-libc first, then rust std, then e.g. tokio, etc.
for preemptively scheduled, parallel multi-threading, this proposal and this paper are relevant
I gotta run but look forward to watching the rest of the sessions later! Folks can feel free to ping me here with any coop threading questions
Per Alex's presentation: componentize-py does also use the same steps, but mostly hides them behind the scenes for a simple UX
Reminder for folks: feel free to take a 15 minute break now! Otherwise, stick around for some watercooler chat :sunglasses:
When will we opt to break apart wasi and CM versions (right now coupled to decrease release/maintainer/stability overhead)?
where is map in this iteration/feedback process? What will be the point where we say it's "shipped"?
Link to slides for my presentation coming up next: https://docs.google.com/presentation/d/1bYT6MOHOHdRomY1xvLwld4TEFWNyh9xwnE1zTwhJ0eQ/edit?usp=sharing
From Zalim in Zoom chat: "I don’t know if it’s the right/best session for that, but I’m wondering about plans re eh and gc in wasmtime. When can we expect it moved to tier 1 (on by default)?"
@fitzgen (he/him) we still don't have cycle collection in Wasmtime, correct?
https://docs.wasmtime.dev/stability-wasm-proposals.html has info on the blockers for enabling GC by default as well
for Alex: exception handling is experimental in wasi-libc behind an LLVM patch. Will you share some status there?
can do
(basically in your flow diagram, I think we're essentially at the guest language stage + wasmtime hardening)
Joel Dice said:
@fitzgen (he/him) we still don't have cycle collection in Wasmtime, correct?
correct, although this isn't necessarily a blocker for turning it on in wasmtime
FYI: extensive discussion about how we can tune Wasmtime for linear memory + exceptions embedders who don't care about GC: https://github.com/bytecodealliance/wasmtime/issues/11256
@Chris Fallin yeah that issue ^ is what I was thinking of in terms of we may want to tune things if we ship exceptions w/o GC but definitely not a major new implementation
and if it's a huge amount of work to ship exceptions w/o GC I also don't think it'd be worth it
Joel is only working on half of the toolchains
working on the tools; someone else will have to do the chains
Browser JSPI support also made it to interop 2026, which means browsers are like very publicly committing to implementing it this year. (See e.g. WebKit: Interop 2026 - JSPI)
From Zalim in Zoom chat: "There’s implementation for stack switching in v8. Also, we have PoC in Kotlin."
By the way: what chat should we be posting in? There's both this one and #Events > Plumbers Summit - February 2026 , but I don't know what the difference is :sweat_smile:
You can think of this one as the live chat, and the other as the hallway track.
ok!
Sorry I think I added confusion... I figured the other was more logistics-related so made this for ad-hoc stuff
(though in practice I think they're not clearly delineated)
@Joel Dice I'm happy to talk to you about wevaling Python at some point; I played with this a little bit when writing the paper after doing the initial SpiderMonkey work. I think it's possible, requires a few refactors in the interpreter
(they're not, and it's perfectly okay to use either)
(the usual caveats about dynamic types as a limit of non-JIT approaches still apply -- I don't remember if CPython has ICs or not)
In addition to the status descriptions Joel is just talking through, I gave an update on StarlingMonkey yesterday. It's not just about WASIp3, but that is part of it
(I'm keeping an eye on both the Zulip chat topics, and the Zoom one, so we have one unified conversation here recorded for posterity. Asking anywhere is fine!)
Question for Joel: Can you speak more to the C++ exceptions requirement for Python? That part wasn't clear to me
re: weval in StarlingMonkey, yep, Tomasz's rebase of my patches appears to be passing everything now (the bug I had been seeing earlier wrt LLVM changes has gone away)
@Alex Crichton IIUC Python C Extensions fundamentally require exceptions support. Either that, or some of the most important ones simply use exceptions in their implementation
rustls would need to use a wasi-crypto; native-tls could do wasi-tls
Also Re: Rust support - the Rust project is now tracking the component work as part of the 2026 roadmap: https://rust-lang.github.io/rust-project-goals/2026/wasm-components.html
re: Java, do we have any info about Google's efforts to compile Java to WasmGC in the browser? Is it a subset/dialect?
Google is pretty into https://github.com/google/j2cl/tree/master
From Zalim in Zoom chat: "JFTR: Kotlin: we resumed our work on CM support, have prototype, working a new modernized version, aimed to ship it"
https://opensource.snarky.ca/Python/WASI/Plans has my TODO list which has stuff people can help with
yay, 3 minute break!
@Brett Cannon that is a fantastic format—and a great plan! :heart:
StarlingMoney is my rapper name.
Offtopic: Any plans to provide an ability to load multiple wasm modules/files in wasmtime and other standalone VMs? Maybe something like esm-integration?
No current plans, but I think adding something like that could be reasonable
Most of our existing use cases are primarily embedding-driven, in the sense that we don't have a lot of major users of the wasmtime CLI outside of hobby/test/etc things. That effectively doesn't motivate a lot of feature-development work just for the CLI (which this would fall under), but it doesn't mean we wouldn't want it
another interesting place to extend could be jco serve where something like ESM integration would be easy
Couldn't it be useful for embedders as well?
In Kotlin, in the long term, we consider moving into generating multiple wasm modules instead of a monolithic one. And a lack of such a possibility in standalone VMs is one of the blockers.
The general shape that this would have is that wasmtime would, via import names, auto-compose a bunch of components together and then run that final component. To the extent that this would be useful for embedders it would be to share this composition functionality, but otherwise I believe some hosts already do this where native host APIs are implemented in terms of host-provided components.
I'd love to chat more about your use case though; would you be up for opening an issue on Wasmtime for this?
Sure! :+1:
to emphasize a key point Alex is making: the solution to linking multiple modules and components is the component model! Adding support for another ad-hoc system for setting up compositions wouldn't really help the ecosystem overall, unless it's also standardized. But it's not entirely clear why we'd need that on top of the CM—with the ability to load external .wasm files referenced from within a component being a key thing we need to make turn-key and usable from wasmtime serve, etc
I have wanted --preload for components in the wasmtime CLI before in certain situations, seems reasonable to support for local/incremental dev situations
I'd envision this personally as a sort of filesystem/import-string driven auto-composition which wasmtime would then run, which I agree would be useful to have
@Zalim Bashorov (Kotlin_, JetBrains) when you talk about multiple modules, are you thinking of runtime instantiation (i.e. loading and instantiating a module on-the-fly from an already-running Wasm app)? If so, that's something that the component model _will_ address, but doesn't yet (nor does any other standard AFAIK).
If all the modules are known ahead of time and instantiated at the same time, then the component model as it exists today should be sufficient.
yeah, that's what I mean. I think a --preload wouldn't quite work in the same way as for core modules, since you need instructions for how to do instantiation and linking and such
Thanks to all speakers and organizers! Great sessions packed with interesting information!
@Joel Dice in general, we want to have both static and dynamic loading, but for outside of the browser, it's mainly static (known ahead of time).
Does wasmtime support loading components from multiple files, or should they be merged first?
Currently they're required to be composed (merged) first into a single component; the CLI only knows how to run components which import WASI interfaces. If a component imports something else then it won't be runnable
I still think something like --preload would be nice because it requires less activation energy than creating a directory tree of the exact right shape
regarding instantiation and linking and --preload, I'd think that every --preload would just insert into the wasmtime::component::Linker and we would ultimately just propagate any linking errors if any
I don't think it would need to have the full expressivity power of the CM, but it would be really nice for simple, quick-and-dirty cases
oh, I see! Yeah, adding any exported interfaces to the linker would make --preload make a ton of sense, and indeed quite useful!
@Zalim Bashorov (Kotlin_, JetBrains) See Linking.md for a detailed discussion of the various ways the component model supports linking (today and in the future).
Where can I find the day 2 zoom link?
Same as yesterday: https://zoom.us/j/91321485453?pwd=nFkgarljzNRhN23qbNzeT92jCgq5Wj.1
Wait, is what Nick is describing made to enable uring-like direct access APIs?
sounds like mmap, too, yeah
it can be used for that. But also for things like get-pixel(x, y) -> u32, used in a tight loop
it's made to expose the host buffers we have in our embedding -- the core engine manages buffers that network devices DMA into and out of, and we want to use those directly, for zero copy from wire to wire
you can also potentially do io_uring-style zero-copy with CM streams by using guest memory for io_uring buffer allocation
ohhh, very nice
Yep, if you control the host side and buffer allocation, you can indeed do that. In our embedding (and in I suspect other use-cases as well) we're slotting into an existing system where we can't take over buffer allocation -- we are just given a pointer -- so this is designed to allow for that flexibility
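To make the shape of this concrete, here's a minimal Rust sketch (hypothetical names, plain Rust rather than the actual wasmtime API) contrasting a per-element hostcall like get-pixel with a single zero-copy view of a host-owned buffer:

```rust
// Hypothetical sketch, not the real embedding API: a buffer owned by the
// host (e.g. one that a network device DMAs into), which the host cannot
// move or reallocate because it is "just given a pointer" to it.
struct HostBuffer {
    width: usize,
    pixels: Vec<u32>, // stand-in for externally owned DMA memory
}

impl HostBuffer {
    /// Shape of a `get-pixel(x, y) -> u32` hostcall: one host/guest
    /// crossing per element, which is what a tight loop would pay for.
    fn get_pixel(&self, x: usize, y: usize) -> u32 {
        self.pixels[y * self.width + x]
    }

    /// Shape of a zero-copy view: direct access to the whole region
    /// with a single crossing, no per-element calls and no copy.
    fn as_slice(&self) -> &[u32] {
        &self.pixels
    }
}
```

The design point from the discussion is that when the embedding controls buffer allocation, guest memory can back the buffers directly; this sketch is for the other case, where the host is handed a buffer it doesn't own.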
Maybe noob question: What's preventing the (unsafe) Rust host code from being inlined directly? As opposed to rewriting it in a custom IDL
that's a really interesting holy grail! right now, different compilers. one could see a future where cg_clif is used to build host code into CLIF that we keep around to inline, maybe -- but there are lots of caveats there wrt ABIs, etc, and also Wasmtime API design (we opted not to expose CLIF as an IR that we take for now)
relevant: https://github.com/bytecodealliance/wasmtime/issues/12311
Relevant issue https://github.com/bytecodealliance/wasmtime/issues/12311
Hahaha
Jinx!
the zero-copy-buf interface would require hosts to support compile-time builtins to implement, right?
If the CM has a native concept of remote buffers (https://github.com/WebAssembly/component-model/issues/369) that can be optimized to heck, how much of the overhead remains?
zero-copy-buf without inlining is "zero-copy" in a weird sense because it's giving you the u8s one call at a time; the "money transfer structuring" cheat of the buffering world
I wonder if lazy-lowering could introduce support to lower stream chunks into a zero-copy-buf like thing, as a canon-opt?
I guess more generally for list<u*> as well
Again, though, if you _can_ just allocate your buffers from guest memory to begin with and let the host use that directly, no fanciness required, that should be the default choice IMHO.
(it does imply magic host powers and isn't virtualizable, though)
that requires much more massive change to everything around it, though, and isn't always feasible at all
(e.g., IIUC Tokio/Hyper don't make this possible, so the "massive change" here would be moving to another async runtime)
Yeah, Tokio/Hyper will need a different API to take proper advantage of io_uring anyway, nevermind the Wasm aspect.
Question: How optimized is the current async state-transition stuff? Is it at a "first implementation" level that satisfies the spec as cleanly as possible (i.e. optimized for spec compliance rather than performance), or is it already decently optimized?
Given the huge difference in bar heights on the graph, I'm just trying to understand if the "big bar" is just an effect of the current implementation, or if it's necessary by design
Slides for my compile-time builtins presentation: https://docs.google.com/presentation/d/1MtIcykgj8zkTlrDfYoNBtVyJxjDXLDbqHo6JmL3TdVQ/edit
@Scott Waye
What do you think will be the safety criticism with implementing these builtins, and exploits? Is it limited to the implementers of the Wasm runtime (Wasmtime/browser), or does it extend to source-language-to-Wasm compilers, e.g. clang?
the safety of compile-time builtins relies upon:
Doing anything nontrivial in a signal handler is scary
Almost an MVCC for stack frames! Nice! :-D
DAP support! \o/
Would this debug wit interface be wasmtime specific?
This is all very cool and warms my debugger-enjoyer heart
Chris is only implementing it for Wasmtime (afaik) but there is nothing in principle stopping another engine from also implementing it
long-term it may make sense to standardize, or add to tool conventions, but I'd be hesitant to do that too early
Will languages be okay with implementing a non-standard interface?
LLDB struggles representing things like Rust enums during debugging; how well is this able to represent items like that?
Mendy Berger said:
Will languages be okay with implementing a non-standard interface?
for the time being, languages won't be implementing this interface themselves (unless they are really excited to)
Got it. Very exciting!
Yosh Wuyts said:
LLDB struggles representing things like Rust enums during debugging; how well is this able to represent items like that?
fortunately (?) that problem is orthogonal to the guest-debugging work -- LLDB gets a view of Wasm memory and locals at a low level, and uses the same logic to interpret that into source-language values that it does on native platforms. So if it gets better on (say) x86, it'll get better on Wasm too
Will languages be okay with implementing a non-standard interface?
I'd expect producers and debugger UIs (IDE?) to first focus on common/standardized ways, and only then adopt custom things to implement advanced features if required.
I wonder if VacantEntry::try_insert could be upstreamed
Support for fallible allocations is unfortunately a pretty hairy ball of questions in upstream Rust so while any one particular API is probably not too bad the general story for OOM-handling is likely to stymie much progress
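Some fallible-allocation building blocks are already stable in std today, though; a small sketch using `Vec::try_reserve` (the `try_push` helper name is mine, not a std API):

```rust
// Sketch of the fallible-allocation pattern under discussion, using the
// stable `Vec::try_reserve` API. The broader "try-std"/OOM-handling
// story is what remains unresolved upstream.
use std::collections::TryReserveError;

/// Push a value only if the backing allocation succeeds, surfacing OOM
/// as a recoverable error instead of aborting the process.
fn try_push<T>(vec: &mut Vec<T>, value: T) -> Result<(), TryReserveError> {
    // Ensure capacity for one more element without aborting on failure.
    vec.try_reserve(1)?;
    // Capacity is guaranteed now, so this push cannot allocate.
    vec.push(value);
    Ok(())
}
```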
bee-forest :bee:
oomTest FTW (https://searchfox.org/firefox-main/source/js/src/builtin/TestingFunctions.cpp#10343-10365)
If you want I can put you in touch with the Rust for Linux folks; they have very similar fallible allocation needs as Wasmtime and might be interested in collaborating on something like "try-std"
FuturesUnordered mostly just wraps a hashmap, data-structure-wise, IIRC (EDIT: maybe not; just looked, and it's more complicated than that), so maybe we could start by forking our own version and making it OOM-safe?
oomTest works great for finding a lot of OOM handling issues. The hardest ones to find are the cases where you are allowed to retry an action after an OOM. You can end up in a situation like:
1. First call to a method:
a. Mutate some fields
b. Try to allocate something and it fails with OOM
c. The error path fails to unwind the mutations from step (a)
2. Second call to a method:
a. Observes inconsistent state where some fields were mutated, but the operation never completed
oomTest by itself doesn't always find that.
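A minimal Rust sketch of that scenario (hypothetical types; the OOM is simulated with a flag rather than a real allocation failure):

```rust
// Sketch of the retry hazard described above: a method mutates state
// before a fallible allocation, and the error path forgets to roll the
// mutation back, so a retry observes inconsistent state.
struct Table {
    len: usize,      // bookkeeping, mutated eagerly
    slots: Vec<u32>, // real storage; its allocation may fail
}

impl Table {
    fn insert(&mut self, value: u32, simulate_oom: bool) -> Result<(), &'static str> {
        // Step 1a: mutate a field first...
        self.len += 1;
        // Step 1b: ...then try to allocate, which may fail with OOM.
        if simulate_oom {
            // Step 1c (the bug): return without undoing `self.len += 1`.
            return Err("oom");
        }
        self.slots.push(value);
        Ok(())
    }

    /// True when the bookkeeping no longer matches the real storage:
    /// the inconsistent state a retry (step 2) would observe.
    fn is_inconsistent(&self) -> bool {
        self.len != self.slots.len()
    }
}
```

A single injected failure plus a retry is what exposes the bug, which is why one-shot OOM injection alone doesn't always find it.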
If y'all switch to futures-concurrency, I'd be happy to add support for fallible allocation behind a flag: https://docs.rs/futures-concurrency
The equivalent to FuturesUnordered is FutureGroup
I wonder how much work the wasmtime-wasi-io step is and if it's worth considering scoping to just p3.
Reimplementing std for fun and profit
minus the "fun" part and minus the "profit" part :face_with_open_eyes_and_hand_over_mouth:
/me cries in async-std
how much work the wasmtime-wasi-io step is
small compared to a lot of the other stuff nick has done as part of this effort, imo
at any rate, for the embeddings nick is targeting, we will be shipping p2 so we will need a wasmtime-wasi-io solution even after we also offer p3
A little bit off-topic to the agenda, but of interest to folks here. We (Mozilla) just published a blog post about Wasm Components on the Web:
https://hacks.mozilla.org/2026/02/making-webassembly-a-first-class-language-on-the-web/
(https://news.ycombinator.com/item?id=47167944)
Please share it wide! Or reach out with any questions/thoughts.
one bonus bit of demo I forgot to show from my talk, for anyone curious -- disassembling and single-instruction-stepping (at the wasm bytecode level!) does work in LLDB too:
* thread #1, name = 'nobody', stop reason = breakpoint 2.1
frame #0: 0x400000000000023e wasm`fib(n=<unavailable>) at test.c:12:9
9
10 __attribute__((noinline))
11 int fib(int n) {
-> 12 int a = 1, b = 1;
13 for (int i = 2; i < n; i++) {
14 int sum = a + b;
15 a = b;
(lldb) disas
wasm`fib:
[ ... ]
0x4000000000000239 <+16>: local.get 0
0x400000000000023b <+18>: i32.store 28
-> 0x400000000000023e <+21>: local.get 1
@Ryan Hunt hoooray, this is SO EXCITING! :partying_face:
that's cool
slides for my out-of-memory (OOM) handling in Wasmtime presentation: https://docs.google.com/presentation/d/1HsaiLGZ4d_WvKrxYFqSGfQn3jjl3l3Xvu9XG4U5K7z0/edit
Ryan Hunt said:
A little bit off-topic to the agenda, but of interest to folks here. We (Mozilla) just published a blog post about Wasm Components on the Web:
https://hacks.mozilla.org/2026/02/making-webassembly-a-first-class-language-on-the-web/
(https://news.ycombinator.com/item?id=47167944)
Please share it wide! Or reach out with any questions/thoughts.
awesome!
posted to r/webassembly too: https://www.reddit.com/r/WebAssembly/comments/1rfh62p/making_webassembly_a_firstclass_language_on_the/?
Slides for my talk from today: https://docs.google.com/presentation/d/1dgVsnnNLJG1vIfeZsV81pmZTQz9brRVGbEiueKky72c/edit?usp=sharing
idle question: should we allocate resource tables with the engine's InstanceAllocator? (which would allow pre-allocating them, with simple/obvious limits, in the pooling allocator)
longer-term, this seems related to the allocation combinator traits stuff I've been floating at various wasmtime meetings
Sort of, sort of not, I think: it's becoming more and more of a central resource, which I think would make sense to put in there, but values in a table often have host-backed memory, e.g. http::HeaderMap
so would help, but not a silver bullet
/me nods
For our own hostcalls, I've gone the route of having a proxy in front of the ResourceTable that tracks allocations, but that can't extend to code reused from wasmtime for wasi calls as these depend on a concrete &mut ResourceTable.
Adding something like that natively to Wasmtime would be reasonable as well, I think; it's not something we could backport, but it would be reasonable to consider going forward
or hooks in StoreLimiter or something
FYI: wasi-sockets' constructor is fallible, but doesn't have a "limit reached" error code
(weird brain tangent) somehow "888, 889, 890" sounded like a phone number at the end of Pat's talk -- "call this number if you think you may have resource exhaustion issues"
If you or a loved one has been a victim of resource exhaustion...
You have 99 problems and they're all resource exhaustion issues
parsing the output of wit-bindgen :mindblown:
Rust's syn crate is particularly useful here; it gets us an AST that we can analyze.
Here is the tool link: https://github.com/chenyan2002/proxy-component
This is very very cool. I'd love for us to start using it to populate the specifications/ dir in the WASI repo
Which I assume is exactly why this tool has been written ^^
very cool
and also populate wasi.dev with this
Question #1: How do you see the POSIX'y interfaces (like filesystem, sockets, ..) being specified? I like the level of detail of the web specs, but for those POSIX-like interfaces it ultimately boils down to: we expose whatever the OS does, with all its quirks and oddities.
Question #2: Do we need to maintain this separately from the WIT files or can it be integrated into the WITs themselves or vice versa? To remove the double bookkeeping
It's also interesting to consider the way this could link back to wasi-testsuite, where each step in the spec ends up referring back to a test and vice-versa.
@Yosh Wuyts yeah, I've been thinking about that, but don't yet have a good answer for how exactly to do that in a low-overhead way
Thanks a lot for the great talks!
@Yosh Wuyts oh and yes: that is indeed what I wrote this for :slight_smile:
@Till Schneidereit you might like https://github.com/bearcove/tracey - it has some really good ideas for two-way linking between specifications and tests, and implementations and tests. The general approach is apparently modeled after some of the big fancy tools used in the safety-critical industry.
Ty for the plumbers summit! That was heaps fun!
oooh, I hadn't seen that; looks very relevant indeed! Thank you for sharing :heart:
one thing I forgot to mention about specwit: the online editor uses JCO to run a componentized version of specwit in the browser—which worked on the first try :heart:
Last updated: Mar 23 2026 at 16:19 UTC