Has anyone thought about using custom allocators (both on the embedder and on the instance side) to make it possible for instances to release memory?
Background: the grow-only semantics are turning out to be quite a problem for persistent instances in my runtime. A lot of code has short bursts of larger memory usage, and the instance ends up with way more memory than it actually needs. New code can be written to regularly re-create instances when they get too large, but that's both awkward and doesn't work for general-purpose code that wasn't written with that constraint in mind.
So, in the absence of an official solution, my idea was to essentially emulate virtual linear memory for the guest with memory mapping.
A custom allocator on the host provides two functions: `malloc` and `free`.
`malloc` initially always creates a new memory mapping at the end of the current linear memory (think `MAP_ANONYMOUS | MAP_FIXED_NOREPLACE` on Linux) to grow the memory. This should probably always map large chunks at once, at least 20+ Wasm pages.
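For illustration, a minimal sketch of this grow path on Linux using the libc crate; `base` (start of the reserved linear-memory region) and `end` (current committed size) are assumed to be tracked by the embedder, and all names are illustrative:

```rust
use std::io;

const WASM_PAGE: usize = 64 * 1024;
const MIN_CHUNK_PAGES: usize = 32; // always commit large chunks at once

/// Commit a new chunk at the current end of linear memory.
/// MAP_FIXED_NOREPLACE fails instead of silently clobbering an
/// existing mapping at that address.
unsafe fn grow_chunk(base: *mut u8, end: usize, pages: usize) -> io::Result<usize> {
    let len = pages.max(MIN_CHUNK_PAGES) * WASM_PAGE;
    let p = libc::mmap(
        base.add(end) as *mut libc::c_void,
        len,
        libc::PROT_READ | libc::PROT_WRITE,
        libc::MAP_PRIVATE | libc::MAP_ANONYMOUS | libc::MAP_FIXED_NOREPLACE,
        -1,
        0,
    );
    if p == libc::MAP_FAILED {
        return Err(io::Error::last_os_error());
    }
    Ok(len) // caller advances its end-of-memory offset by this much
}
```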
`free` can now be used to release the memory, which will simply unmap the mapping, returning the memory to the host or to a cross-instance allocator. The relevant memory range needs to be reserved/protected to prevent access by the instance, and a signal handler is needed to recover from illegal accesses. There is now a hole in linear memory which can be re-used for future allocations.
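The matching release path could look like this: rather than a bare `munmap` (which would leave the address range free for unrelated mappings to land in), remapping the hole as `PROT_NONE` returns the physical pages to the OS while keeping the range reserved, so any guest access traps. Same assumptions as the sketch above:

```rust
/// Punch a hole: drop the physical pages backing a chunk and leave an
/// inaccessible reservation in their place so guest accesses fault.
unsafe fn release_chunk(base: *mut u8, offset: usize, len: usize) -> std::io::Result<()> {
    let p = libc::mmap(
        base.add(offset) as *mut libc::c_void,
        len,
        libc::PROT_NONE,
        libc::MAP_PRIVATE | libc::MAP_ANONYMOUS | libc::MAP_FIXED,
        -1,
        0,
    );
    if p == libc::MAP_FAILED {
        return Err(std::io::Error::last_os_error());
    }
    Ok(())
}
```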
===
On the guest side this needs a custom allocator that expects relatively large chunks of memory for each `malloc` and then uses them internally. It shouldn't be too hard to customize an existing allocator to do this, which can then be pretty easily used to patch up Rust/C/C++ code unobtrusively.
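As a sketch of what the guest-side shim could look like, assuming the embedder exposes hypothetical `host_chunk_alloc`/`host_chunk_free` imports; a real implementation would layer an existing allocator (e.g. dlmalloc) on top so small allocations are served from within chunks rather than forwarded one-to-one:

```rust
use core::alloc::{GlobalAlloc, Layout};

// Hypothetical imports provided by the embedder; names are illustrative.
extern "C" {
    fn host_chunk_alloc(size: usize) -> *mut u8; // mmap-backed, page-aligned
    fn host_chunk_free(ptr: *mut u8, size: usize); // unmaps / punches a hole
}

struct HostChunkAlloc;

unsafe impl GlobalAlloc for HostChunkAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // Chunks are page-aligned, which satisfies any practical alignment.
        host_chunk_alloc(layout.size())
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        host_chunk_free(ptr, layout.size())
    }
}

// Route all of the guest's Rust allocations through the host allocator.
#[global_allocator]
static ALLOC: HostChunkAlloc = HostChunkAlloc;
```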
I think this should be implementable in Wasmtime as-is with a `MemoryCreator`. The only open question for me is probably how to handle illegal accesses without blowing up the thread.
Has anyone thought about or actually implemented something like this?
Or does anyone see any concrete problems with the idea?
(Addendum: ignore the "or to a cross-instance allocator" part above, which of course doesn't make sense with mapped memory.)
I also don't know if creating tens of thousands or millions of mappings is a problem for the OS.
On Linux the default limit is quite low at 65530 (`sysctl vm.max_map_count`).
It's quite possible that this is rather unusual and could cause problems.
If I understand you right, I believe this is something that Wasmtime does indeed not implement, but mainly because wasm itself has no facility for it. Linear memories in wasm can only be grown, never shrunk (and can't have holes in them). Changing that would require a CG-level proposal, I believe.
It may also be worth pointing out that linear memories and instances live as long as the Store that contains them, and that's an architectural decision of Wasmtime itself, not necessarily inherent to wasm (although I don't think your question is directly related to this).
Yeah this is something we are very unlikely to implement in a non-standard way.
However, you may be interested in the `memory-control` Wasm proposal: https://github.com/WebAssembly/memory-control/blob/master/proposals/memory-control/Overview.md (specifically the `memory.discard` functionality)
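For intuition about what `memory.discard` offers: it zeroes a page range while keeping it accessible, releasing the backing physical pages. On Linux an engine could plausibly implement it with `madvise(MADV_DONTNEED)`, which zero-fills anonymous private pages on next access. A sketch, with `base` and the page range assumed valid and page-aligned:

```rust
use std::io;

/// Hypothetical engine-side memory.discard: drop the physical pages
/// backing part of an anonymous mapping while keeping it mapped; the
/// next guest access sees zero-filled pages instead of trapping.
unsafe fn discard_pages(base: *mut u8, offset: usize, len: usize) -> io::Result<()> {
    let rc = libc::madvise(
        base.add(offset) as *mut libc::c_void,
        len,
        libc::MADV_DONTNEED,
    );
    if rc != 0 {
        return Err(io::Error::last_os_error());
    }
    Ok(())
}
```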
I didn't mean to suggest this was something that wasmtime should implement, exactly because it's not standard and requires a custom allocator inside the instance.
It's basically a hack to let linear memory grow as much as it wants while still releasing the "holes" in it back to the OS, and hence indirectly back to the process running wasmtime, because another mapping can reuse the physical memory.
I just wanted to bring up the idea and see if someone sees a reason this really wouldn't work.
Like I said, this should probably already be doable without any changes to wasmtime via a custom `MemoryCreator`.
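The wiring for that could look roughly like this, assuming a hypothetical `MmapMemoryCreator` implementing the unsafe `wasmtime::MemoryCreator` trait on top of the mmap scheme sketched earlier (the trait's method signatures have shifted between Wasmtime versions, so check the docs for your release):

```rust
use std::sync::Arc;
use wasmtime::{Config, Engine, MemoryCreator, Result};

/// Build an engine that routes all host-created linear memories
/// through a custom creator instead of Wasmtime's default one.
fn engine_with_custom_memory(creator: Arc<dyn MemoryCreator>) -> Result<Engine> {
    let mut config = Config::new();
    config.with_host_memory(creator);
    Engine::new(&config)
}
```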
@fitzgen (he/him)
Ah, thanks for that link. I now also stumbled on https://github.com/WebAssembly/design/issues/1397.
Some of the mapping-related ideas mentioned in that discussion would actually allow my idea in a non-hacky way.
I think if you were to implement a custom `MemoryCreator` this would probably work, but if you create a "hole" which segfaults instead of being lazily initialized to zero, it may not be safe from a Rust perspective, depending on what you're doing, because we hand out `&[u8]` safely for all of linear memory.
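To make the hazard concrete, a hypothetical snippet where `hole_offset` points into a page the custom allocator has unmapped; the slice from `Memory::data` spans all of linear memory, so the index is in bounds for the slice but backed by no accessible page:

```rust
use wasmtime::{Memory, Store};

/// The read below is reachable from safe Rust, yet it segfaults if the
/// page behind `hole_offset` was unmapped or remapped PROT_NONE: that's
/// undefined behavior, which is why holes break the &[u8] guarantee.
fn read_from_hole(memory: &Memory, store: &Store<()>, hole_offset: usize) -> u8 {
    let data: &[u8] = memory.data(store);
    data[hole_offset]
}
```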