Do we have a policy for whether we handle OOMs?
I remember discussing this in one of our biweekly meetings and, IIRC, the discussion wasn't super conclusive. I think the closest we got was "we should make an attempt to avoid the common OOM cases, but not bend over backwards". Does that still sound right?
Context: the fuzzers have found out that they can trigger OOMs by requesting very large tables. Should we polyfill `Vec::try_with_capacity` using the `std::alloc` API here? And also handle this in `table.grow`? How deep do we want to chase this?
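For reference, a minimal sketch of what that polyfill could look like, assuming all we need is fallible allocation of the backing buffer (the error type and function name here are just illustrative):

```rust
use std::alloc::{self, Layout};

/// Fallible version of `Vec::with_capacity`: returns `Err` on
/// allocation failure instead of aborting the process.
fn try_with_capacity<T>(capacity: usize) -> Result<Vec<T>, ()> {
    let layout = Layout::array::<T>(capacity).map_err(|_| ())?;
    if layout.size() == 0 {
        // ZSTs (and capacity 0) never hit the allocator, so the
        // infallible constructor is fine here.
        return Ok(Vec::with_capacity(capacity));
    }
    // SAFETY: `layout` has nonzero size, as checked above.
    let ptr = unsafe { alloc::alloc(layout) };
    if ptr.is_null() {
        return Err(());
    }
    // SAFETY: `ptr` was allocated by the global allocator with the
    // layout of `capacity` `T`s, and length 0 needs no initialization.
    Ok(unsafe { Vec::from_raw_parts(ptr.cast::<T>(), 0, capacity) })
}
```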
hmm... it looks like any call to `malloc` that crosses the fuzzer threshold triggers an ASan crash, so even if we write the code to recover from malloc failure, it doesn't fix this fuzz bug :-/
Yeah, I think even if we gracefully handle OOM here it'll still show up as a bug, so in this case I think we'll need an ahead-of-time calculation to limit the size of a table. I thought wasmparser might already have this limit, but it looks like it doesn't cover table limits.
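Something like this is the shape I have in mind; the cap value and the check itself are made up for illustration, not an existing wasmparser limit:

```rust
/// Illustrative cap on table elements; the real value would need
/// tuning against the fuzzer's malloc threshold.
const MAX_TABLE_ELEMENTS: u32 = 1 << 20;

/// Check a table's declared limits before allocating anything for it.
fn check_table_limits(initial: u32, _maximum: Option<u32>) -> Result<(), String> {
    // Only the initial size forces an up-front allocation; a declared
    // maximum above the cap is fine, since `table.grow` can simply
    // fail (return -1) once the runtime cap is hit.
    if initial > MAX_TABLE_ELEMENTS {
        return Err(format!(
            "table initial size {initial} exceeds limit {MAX_TABLE_ELEMENTS}"
        ));
    }
    Ok(())
}
```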
Do we have any way to set explicit size limits for the heap and tables, ideally exposed in the embedding API? The fuzzing setup could use those and shut down instances that exceed them, I guess? (Though I guess it should also try to avoid doing this too much.)
These kinds of limits seem generally useful to embedders, so if there's something lacking there, it might be good to resolve that.
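As a strawman for what that embedding API could look like (the trait and method names here are hypothetical, just to sketch the shape):

```rust
/// Hypothetical embedder-facing hook, consulted before a memory or
/// table grows; returning `false` denies the growth.
trait ResourceLimiter {
    fn memory_growing(&mut self, current: usize, desired: usize) -> bool;
    fn table_growing(&mut self, current: u32, desired: u32) -> bool;
}

/// What a fuzzing setup could plug in: hard caps, so an instance that
/// asks for too much sees a failed `grow` instead of an OOM.
struct FuzzLimits {
    max_memory_bytes: usize,
    max_table_elements: u32,
}

impl ResourceLimiter for FuzzLimits {
    fn memory_growing(&mut self, _current: usize, desired: usize) -> bool {
        desired <= self.max_memory_bytes
    }
    fn table_growing(&mut self, _current: u32, desired: u32) -> bool {
        desired <= self.max_table_elements
    }
}
```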
We have some limits in cranelift and wasmparser, but they aren't user configurable, last I looked.