Stream: wasi

Topic: Unbounded collections on the WASI interface


Pavel Šavara (Feb 17 2026 at 10:48):

Consider the following scenarios.

Should parameter constraints become part of the WIT language?

cc @Joel Dice @Luke Wagner @Ralph

bjorn3 (Feb 17 2026 at 13:29):

A malicious HTTP client is sending an unbounded number of HTTP request headers. The HTTP server handler code in the guest allocates data structures and could OOM if there are too many. Is the guest code responsible for doing this level of HTTP validation? If not, could we say in the wasi:http specification that this is guarded by the host (server component) and is part of the contract? In non-WASM C# HTTP this is achieved by an overall max request header length.

Each request is handled in a separate wasm instance, so as long as you limit the amount of memory each individual wasm instance can consume, you don't risk a DoS beyond that one request. And if a client is sending you garbage, they shouldn't expect a non-garbage result anyway.
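Independent of the per-instance memory cap described above, a guest could still enforce the kind of header limits Pavel mentions before allocating per-header data structures. A minimal sketch in plain Rust (all names and limits are hypothetical, not the actual wasi:http bindings):

```rust
// Hypothetical guest-side guard: reject a request whose headers exceed
// configured limits before building per-header data structures.
const MAX_HEADER_COUNT: usize = 50;
const MAX_HEADER_BYTES: usize = 10 * 1024; // 10 KB per header (name + value)

#[derive(Debug, PartialEq)]
enum HeaderError {
    TooMany { count: usize },
    TooLarge { name: String, bytes: usize },
}

fn validate_headers(headers: &[(String, Vec<u8>)]) -> Result<(), HeaderError> {
    if headers.len() > MAX_HEADER_COUNT {
        return Err(HeaderError::TooMany { count: headers.len() });
    }
    for (name, value) in headers {
        let bytes = name.len() + value.len();
        if bytes > MAX_HEADER_BYTES {
            return Err(HeaderError::TooLarge { name: name.clone(), bytes });
        }
    }
    Ok(())
}

fn main() {
    let ok = vec![("content-type".to_string(), b"text/plain".to_vec())];
    assert_eq!(validate_headers(&ok), Ok(()));

    // 51 headers: one more than the cap, so the request is rejected.
    let too_many: Vec<_> = (0..51)
        .map(|i| (format!("x-h-{i}"), vec![b'v']))
        .collect();
    assert!(matches!(
        validate_headers(&too_many),
        Err(HeaderError::TooMany { count: 51 })
    ));
    println!("header validation sketch ok");
}
```

The check runs before any per-header allocation, which is the property the rest of the thread is after: fail gracefully instead of OOMing.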

A malicious HTTP server is sending an unbounded number of HTTP response (trailing) headers. Same question as above.

Do you mean when the server is a wasm component? If the wasm component was compromised by the client, that is effectively a self-DoS like the above. And if the wasm component was malicious from the start, a zip bomb would probably be more effective at bringing down the client. The client is responsible for handling a malicious response, I would say.

Or do you mean a wasm component connecting to a malicious server? That seems equivalent to your first scenario in terms of how to defend against it.

Luke Wagner (Feb 17 2026 at 17:29):

Good points @bjorn3. Additionally, I'd say that while components are isolated in terms of capabilities, they're not isolated in terms of traps, OOM, and excessive resource usage. For this "stronger" (cgroups-y) degree of isolation, there's a general post-1.0 idea of blast zones, but for now, when you want this stronger degree of isolation, you need to use host powers (as most production wasm embeddings do today).

That being said, for the specific case of "too-large list/string passed into a component": today this ends up trapping (when cabi_realloc is called for this too-large list/string and memory.grow returns -1, you have no option other than to trap). But one corollary of the lazy lowering ABI that we're thinking about working on next is that cabi_realloc goes away and too-large values can be dropped without copying, which means that guest code is able to gracefully handle OOM (via error results, etc.) if it wants, instead of trapping.

Pavel Šavara (Feb 18 2026 at 10:24):

"stronger" (cgroups-y) degree of isolation

My question goes in the opposite direction; it's meant in a "defense in depth" sense.
I would prefer to validate the request/response at a finer granularity and fail gracefully.

OOM and trap are not really scalable and waste resources. It possibly multiplies the cost of defense vs. the cost of attack.

Each request is handled in a separate wasm instance

This solution is about security/configuration at the deployment layer. As the author of a component, I don't want to rely on it. I want to do my part in making the attack surface smaller.

I would like to express something like: "I do not accept more than 50 request headers", "each header can't be longer than 10 KB", or "refuse any request body over 10 MB".

This could be a pre-flight API that the HTTP handler could implement. Or it could be part of the host contract in some way. Thoughts?
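One hypothetical shape for such a pre-flight contract, sketched in plain Rust rather than WIT (every name and default here is invented for illustration, not a proposed API):

```rust
// Hypothetical pre-flight limits a handler could advertise to the host,
// mirroring the constraints described above.
#[derive(Debug, Clone, Copy)]
struct RequestLimits {
    max_header_count: u32,
    max_header_bytes: u32, // per header
    max_body_bytes: u64,
}

impl Default for RequestLimits {
    fn default() -> Self {
        RequestLimits {
            max_header_count: 50,
            max_header_bytes: 10 * 1024,      // 10 KB
            max_body_bytes: 10 * 1024 * 1024, // 10 MB
        }
    }
}

// Host-side check performed before the request ever reaches the guest,
// so an over-limit request is refused instead of triggering guest OOM.
fn host_accepts(limits: &RequestLimits, header_count: u32, body_bytes: u64) -> bool {
    header_count <= limits.max_header_count && body_bytes <= limits.max_body_bytes
}

fn main() {
    let limits = RequestLimits::default();
    assert!(host_accepts(&limits, 12, 4096));
    assert!(!host_accepts(&limits, 51, 4096)); // too many headers
    assert!(!host_accepts(&limits, 12, 11 * 1024 * 1024)); // body too large
    println!("pre-flight sketch ok");
}
```

The point of the sketch is the division of responsibility: the guest declares its limits once, and the host enforces them before any guest allocation happens.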

Or do you mean a wasm component connecting to a malicious server?

Yes, I was talking about both client and server; this is the client side.
In this scenario, it would be good if I could express what a "too large response" is for my use case.

Perhaps, this is similar to timeout options:
https://github.com/WebAssembly/WASI/issues/813

Should I add my feature request there ?

when cabi_realloc is called for this too-large list, today this ends up trapping

Even if allocating the call buffer on the heap succeeds, marshaling into guest (C#) data structures will temporarily double the memory. With a possibly bad design of the wit-bindgen generated code, this could also lead to a stack overflow during marshaling.

lazy lowering ABI

This seems to be an optimization at the marshaling layer. I wonder if there could be callbacks that would allow the application layer to say "10K items in this list is too much" well before it becomes an OOM scenario.

It makes sense to talk about such constraints for the general component API attack surface, not just HTTP specifically. I guess wasi:sockets has similar challenges.

Luke Wagner (Feb 18 2026 at 19:04):

@Pavel Šavara That makes sense

This seems to be an optimization at the marshaling layer. I wonder if there could be callbacks that would allow the application layer to say "10K items in this list is too much" well before it becomes an OOM scenario.

The Lazy ABI would allow guest code to decide whether to accept or fail lowering any particular list/string using arbitrary guest-code logic (ultimately deciding whether to call some TBD list.lower or list.drop built-in before returning a possible-error result), and so the guest bindgen could do something more nuanced than simply "fail if memory.grow fails", up to and including, to your point, allowing guest code to register callbacks. And since this is all controlled by guest wit-bindgen, it could be configurable and iterated on over time independently of the runtime or WIT interface, which is nice.
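The accept-or-drop decision described here could be modeled, very roughly, as a guest-registered predicate the bindgen consults before copying anything. A plain-Rust sketch (every name is hypothetical; the real list.lower/list.drop built-ins are TBD, so plain functions stand in for them):

```rust
// Rough model of a lazy-lowering decision: before copying a list into
// guest memory, the bindgen asks a registered policy callback whether
// the declared length is acceptable. All names are hypothetical.
type ListPolicy = fn(declared_len: usize) -> bool;

enum Lowered<T> {
    Value(Vec<T>),                    // policy accepted; list was copied
    Rejected { declared_len: usize }, // policy refused; nothing was copied
}

fn lower_list<T: Clone>(items: &[T], policy: ListPolicy) -> Lowered<T> {
    if policy(items.len()) {
        Lowered::Value(items.to_vec()) // stand-in for a list.lower built-in
    } else {
        Lowered::Rejected { declared_len: items.len() } // stand-in for list.drop
    }
}

fn main() {
    // Application-layer policy: "10K items in this list is too much".
    let policy: ListPolicy = |len| len <= 10_000;
    match lower_list(&vec![0u8; 20_000], policy) {
        Lowered::Rejected { declared_len } => {
            println!("rejected list of {declared_len} items without copying")
        }
        Lowered::Value(_) => unreachable!(),
    }
}
```

The key property mirrored from the message above: the policy runs on the declared length before any copy, so the guest can return an error result instead of trapping on memory.grow failure.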

In the meantime, I would expect the host implementations of WASI HTTP and CLI to impose reasonable default limits so that this issue doesn't arise much in practice.


Last updated: Feb 24 2026 at 04:36 UTC