Hello! I'm looking at running wasmtime within seastar. A big concern for me as an embedder is controlling the CPU + memory usage of the Wasm functions that run. Async support has everything I need for CPU throttling with gas-meter yielding, but memory usage is trickier. Seastar allocates all memory at application start and divides it up per core, then each core has its own buddy allocator. As the process runs this memory space can become quite fragmented, and there are situations where there is no contiguous chunk of a couple MB available (even if there are GBs of memory available on the system). How difficult would it be to add support for mapping disjoint sets of pages to a single Wasm memory? I'm thinking you'd need a page-table-like component.
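(For context, the CPU-throttling setup I have in mind is roughly the sketch below. This assumes a recent wasmtime Rust embedding; `set_fuel` and `fuel_async_yield_interval` are the names in current releases and have shifted over time.)

```rust
use anyhow::Result;
use wasmtime::{Config, Engine, Store};

fn build_store() -> Result<Store<()>> {
    let mut config = Config::new();
    config.async_support(true); // needed so guest execution can yield at await points
    config.consume_fuel(true);  // instrument Wasm with fuel (gas) accounting

    let engine = Engine::new(&config)?;
    let mut store = Store::new(&engine, ());

    // Give the guest a fuel budget and yield back to the executor every
    // 10_000 units so a long-running call can't monopolize the reactor.
    store.set_fuel(1_000_000)?;
    store.fuel_async_yield_interval(Some(10_000))?;
    Ok(store)
}
```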
My guess would be that adding this is a significant project and would likely have a large perf impact. If you're working on a system with virtual memory, though, you may be able to achieve the same results using the limiter built into stores.
For example, if Wasm wants 1 MB of memory you could allocate that from the system allocator but not use it: the RSS would be the same, and you'd just have the resident memory elsewhere.
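A rough sketch of the built-in limiter I'm referring to, assuming the Rust embedding and the stock `StoreLimits` type (how you source the memory from the system allocator vs. seastar's pool is up to the embedder):

```rust
use wasmtime::{Engine, Store, StoreLimits, StoreLimitsBuilder};

struct HostState {
    limits: StoreLimits,
}

fn limited_store(engine: &Engine) -> Store<HostState> {
    // Cap each linear memory in this store at 64 MiB: a memory.grow past
    // that limit fails inside the guest instead of exhausting host memory.
    let limits = StoreLimitsBuilder::new()
        .memory_size(64 << 20)
        .instances(1)
        .build();

    let mut store = Store::new(engine, HostState { limits });
    store.limiter(|state| &mut state.limits);
    store
}
```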
As a proxy for the performance impact, the "softmmu" page-table emulation in QEMU might give some idea of what this would cost. This paper claims QEMU's softmmu accounts for 38% of emulation time, even with a TLB implemented in software. I'd second the suggestion to find a way to use virtual memory if at all possible!
I'm not sure of the details, but does Wasmtime's pooling allocator help here?
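For reference, enabling it looks roughly like the sketch below. This is against a recent wasmtime Rust release, and the knob names (`total_memories`, `max_memory_size`) have moved around between versions:

```rust
use anyhow::Result;
use wasmtime::{Config, Engine, InstanceAllocationStrategy, PoolingAllocationConfig};

fn pooling_engine() -> Result<Engine> {
    // Reserve slots for instances and their linear memories up front, so
    // later instantiations don't go through a general-purpose allocator.
    let mut pool = PoolingAllocationConfig::default();
    pool.total_memories(128);
    pool.max_memory_size(64 << 20); // per-memory cap of 64 MiB

    let mut config = Config::new();
    config.allocation_strategy(InstanceAllocationStrategy::Pooling(pool));
    Engine::new(&config)
}
```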
Thanks for the suggestions (and the paper link!) - very helpful. There are ways to work around this and use the system allocator (and virtual memory), and it seems like that would be a better path forward.
Preallocating memory on startup and pooling it is an option, but a very limiting one for a number of reasons (which I'm happy to expand on if you're interested).
Tyler Rockwood has marked this topic as resolved.