Hi, I recently saw that a WASM module's heap is grown by the compiler-emitted function __builtin_wasm_memory_grow(), and I'm curious what happens after the module calls this. How does the runtime handle the request and allocate more memory?
Also, I want to know where and how address checking happens for a WASM module. When the module tries to access an address above the linear memory's current size, how is this detected and turned into an error?
Are you asking about Wasmtime or wasm runtimes in general?
In general: __builtin_wasm_memory_grow refers to the core wasm memory.grow instruction
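For reference, the core semantics of memory.grow can be modeled in a few lines. This is a minimal sketch of the spec-level behavior, not any runtime's actual implementation: the instruction takes a delta in 64 KiB pages, returns the previous size in pages on success, and returns -1 on failure (e.g. when the declared maximum would be exceeded).

```rust
// Illustrative model of core wasm `memory.grow` semantics.
// Sizes are measured in 64 KiB pages.
const PAGE_SIZE: usize = 64 * 1024;

struct LinearMemory {
    pages: u32,
    max_pages: u32, // declared maximum, or an engine-imposed limit
}

impl LinearMemory {
    /// Returns the old size in pages, or -1 if the grow request
    /// exceeds the maximum (or the runtime can't allocate).
    fn grow(&mut self, delta: u32) -> i32 {
        let old = self.pages;
        match old.checked_add(delta) {
            Some(new) if new <= self.max_pages => {
                self.pages = new; // a real runtime would map/commit pages here
                old as i32
            }
            _ => -1,
        }
    }

    fn byte_size(&self) -> usize {
        self.pages as usize * PAGE_SIZE
    }
}

fn main() {
    let mut mem = LinearMemory { pages: 1, max_pages: 4 };
    assert_eq!(mem.grow(2), 1); // old size was 1 page
    assert_eq!(mem.byte_size(), 3 * PAGE_SIZE);
    assert_eq!(mem.grow(10), -1); // exceeds max: fails, size unchanged
    assert_eq!(mem.pages, 3);
    println!("ok");
}
```

The -1-on-failure convention is why C code calling __builtin_wasm_memory_grow must check the return value before assuming the memory got bigger.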
Regarding memory management in Wasmtime, this may be helpful: https://docs.wasmtime.dev/contributing-architecture.html#linear-memory
@Lann Martin I'm curious about how the compiler/runtime handles this instruction
@Joel Dice Thank you, this is helpful! I'm also curious what's stopping WASM from having a memory shrink.
I'm wondering if it's possible to implement POSIX-style heap semantics like brk and sbrk as host functions exposed to the module, if I modify Wasmtime's implementation of linear memory.
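For context, an sbrk layer on top of page-granular growth is roughly how wasi-libc already obtains heap memory for malloc; the sketch below is simplified and the names are mine, not wasi-libc's. The break pointer moves in bytes while the underlying memory only grows in whole 64 KiB pages; the hard part, as this thread goes on to discuss, is that growing has memory.grow to call but shrinking has nothing.

```rust
// Sketch of POSIX-style sbrk built on page-granular memory growth
// (simplified; names and fields are illustrative, not wasi-libc's).
const PAGE_SIZE: usize = 64 * 1024;

struct Heap {
    brk: usize,    // current program break, in bytes
    mapped: usize, // bytes actually backed by linear memory
}

impl Heap {
    /// Move the break up by `increment` bytes, growing the underlying
    /// memory (whole pages only) when the break passes what's mapped.
    /// Returns the old break, mimicking sbrk(2).
    fn sbrk(&mut self, increment: usize) -> usize {
        let old = self.brk;
        let new = old + increment;
        if new > self.mapped {
            let needed = new - self.mapped;
            let pages = (needed + PAGE_SIZE - 1) / PAGE_SIZE;
            // In a real module, this is where memory.grow(pages) happens.
            self.mapped += pages * PAGE_SIZE;
        }
        self.brk = new;
        old
    }
}

fn main() {
    let mut heap = Heap { brk: 0, mapped: 0 };
    assert_eq!(heap.sbrk(100), 0); // first allocation maps one page
    assert_eq!(heap.mapped, PAGE_SIZE);
    assert_eq!(heap.sbrk(PAGE_SIZE), 100); // crosses into a second page
    assert_eq!(heap.mapped, 2 * PAGE_SIZE);
    println!("ok");
}
```

A negative increment could move `brk` down, but the pages below it would stay mapped; releasing them is exactly what a memory.shrink-style instruction would be for.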
A memory.shrink (or memory.discard) instruction has been discussed (e.g. https://github.com/WebAssembly/design/issues/1397), but I don't know if anyone's actively working on it.
I think part of the reason we haven't seen a memory.shrink is that for many systems using Wasm it is acceptable - or even desirable - to have short-lived instances recreated for each unit of work (e.g. a request), rather than long-running instances that need memory reclamation.
So I saw there are 4 types of RuntimeLinearMemory:
- MmapMemory
- StaticMemory
- MmapMemoryProxy
- SharedMemory

Is a wasm module usually attached to one, or more than one, of these RuntimeLinearMemory instances?
I saw in wasi-libc that when memory.grow is called, the first argument (the memory index) is always 0, so I suppose the 0th memory instance is always the main MmapMemory.
Also, I might need some help finding the heap bounds-checking code in wasmtime :joy:
The specifics there are: MmapMemory is what most modules use, as it's the default; StaticMemory is used with the pooling allocator; MmapMemoryProxy, IIRC, is used by the embedder API as an opt-in for host-defined memories; and SharedMemory is used for shared memories (e.g. memory for threads). Each linear memory is only one of these, and most modules have a single linear memory. If a module has multiple linear memories, though, each one could be backed by a distinct instance.
I suppose the 0th memory instance is always the main MmapMemory
More-or-less: yes. The 0 here is "memory 0", where 0 is the index of the memory in the module's memory index space. The Rust/C/C++ memory model doesn't support more than one memory most of the time, so typically there's only a single memory at index 0. And yes, it's most of the time MmapMemory, as that's the default.
Also, I might need some help to find the heap bound checking code in wasmtime
This is intertwined in a few places. Throughout the host API, bounds checks happen because the Rust view of linear memory is &[u8], which is bounds-checked by default. In compiled code, however, bounds are represented in a few different ways, and the checks can be compiled a number of different ways.
Put another way, there's no single place for bounds checks in wasmtime; they're spread all over, as needed. Do you have a particular bounds check in mind, or an area you were thinking of focusing on?
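One of the strategies for compiled code is an explicit check emitted before each load/store. A minimal sketch of that check (function name and error string are illustrative, not Wasmtime's): an access at `addr` with a constant `offset` immediate, touching `access_size` bytes, is in bounds iff the sum does not exceed the current memory length, with the arithmetic done in 64 bits so 32-bit inputs can't wrap.

```rust
/// Sketch of an explicit bounds check for a 32-bit wasm memory access.
/// The 64-bit sum cannot overflow for any combination of u32 inputs.
fn check_access(addr: u32, offset: u32, access_size: u32, mem_len: u64) -> Result<u64, &'static str> {
    let end = addr as u64 + offset as u64 + access_size as u64;
    if end <= mem_len {
        Ok(addr as u64 + offset as u64) // effective address of the access
    } else {
        Err("out of bounds memory access")
    }
}

fn main() {
    let mem_len: u64 = 64 * 1024; // one page
    assert_eq!(check_access(0, 0, 4, mem_len), Ok(0));
    assert_eq!(check_access(65532, 0, 4, mem_len), Ok(65532)); // last valid i32 load
    assert!(check_access(65533, 0, 4, mem_len).is_err()); // straddles the end
    assert!(check_access(u32::MAX, 8, 4, mem_len).is_err()); // no wraparound
    println!("ok");
}
```

The per-access cost of this branch is why the virtual-memory/guard-page strategy described next is preferred where the address space for it is available.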
Oh yeah, thank you for the explanations! I'm really interested in this case: say the C program just calls malloc or sbrk to grow the linear memory to X, and then the program tries to access an address higher than X. How is this error caught?
Those sorts of errors are caught via segfaults and signal handling. If wasm accesses memory outside of its bounds, then by default that's guaranteed to be unmapped memory. Cranelift's translation of bounds checks happens in this file, but I'll note that this is a tricky part of Wasmtime, since we implement a few strategies for bounds checks in compiled code.
By default, though, there are no explicit bounds checks. We reserve 8G of virtual memory, initially unmapped, for each linear memory. The linear memory itself starts 2G into this region, often with an initial size around ~1M. Growth happens by mapping pages in. Out-of-bounds detection works by letting the wasm actually perform the load/store: it hits unmapped memory, triggering a segfault, and the segfault is translated into an out-of-bounds error.
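The layout described in that message can be written down concretely. This is a sketch using the numbers given above (8G reservation, base 2G in, ~1M initial size); the struct and field names are mine, not Wasmtime's. The point is that any address a 32-bit wasm program can form, plus any offset immediate, still lands inside the reservation, so an out-of-range access faults rather than touching unrelated memory.

```rust
const GIB: u64 = 1 << 30;

/// Sketch of one linear memory's virtual-memory reservation
/// (numbers from the discussion; names are illustrative).
struct Reservation {
    total: u64,       // whole reserved (initially unmapped) region
    base_offset: u64, // where the linear memory base sits in the region
    mapped_len: u64,  // bytes currently mapped/accessible past the base
}

impl Reservation {
    fn new(initial_len: u64) -> Self {
        Reservation { total: 8 * GIB, base_offset: 2 * GIB, mapped_len: initial_len }
    }

    /// Would a load/store at this wasm address hit unmapped memory
    /// (and therefore segfault, becoming an out-of-bounds trap)?
    fn faults(&self, wasm_addr: u64) -> bool {
        wasm_addr >= self.mapped_len
    }

    /// Guard bytes remaining after the accessible region.
    fn guard_after(&self) -> u64 {
        self.total - self.base_offset - self.mapped_len
    }
}

fn main() {
    let r = Reservation::new(1 << 20); // ~1M initial size, as above
    assert!(!r.faults(0));
    assert!(r.faults(1 << 20)); // first byte past the end traps
    assert!(r.faults(4 * GIB)); // even the largest 32-bit address traps
    assert_eq!(r.guard_after(), 6 * GIB - (1 << 20));
    println!("ok");
}
```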
Thanks, I see how this works. I don't think I need to change this bounds-checking part of Cranelift; I just want to know what LinearMemory metadata the bounds checker looks at to know which region is valid.
As you said, I guess the bounds checker learns this from the metadata of the RuntimeLinearMemory instance. So when memory.grow is called, the MmapMemory increases its size, and then the bounds checker knows the updated bound. I'm digging deeper to see if this is true.
That's basically correct, yeah, although one thing I can clarify is that we're trying to keep two pieces of data in sync: what's mapped, and what the length of the linear memory is. The "source of truth" depends on who's asking. If compiled code is asking, the source of truth is what's mapped and what isn't. For embedder/host purposes, the source of truth is the length field. The job of RuntimeLinearMemory is basically to keep these two in sync.
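Those two pieces of state can be sketched like this (types and fields assumed for illustration, not Wasmtime's actual code): a grow first makes the new pages accessible, which updates the truth compiled code relies on, and only then publishes the new length that host APIs read.

```rust
const PAGE_SIZE: u64 = 64 * 1024;

/// Sketch of the two pieces of state a grow keeps in sync
/// (names and ordering illustrative).
struct RuntimeMemory {
    len: u64,    // source of truth for embedder/host APIs
    mapped: u64, // source of truth for compiled code (what would fault)
}

impl RuntimeMemory {
    /// Grow by `delta_pages`, returning the old size in pages.
    fn grow(&mut self, delta_pages: u64) -> Option<u64> {
        let old_pages = self.len / PAGE_SIZE;
        let new_len = self.len.checked_add(delta_pages.checked_mul(PAGE_SIZE)?)?;
        // 1) Make the pages accessible first (mmap/mprotect in a real runtime)...
        self.mapped = new_len;
        // 2) ...then publish the new length to the host-visible field.
        self.len = new_len;
        Some(old_pages)
    }
}

fn main() {
    let mut m = RuntimeMemory { len: PAGE_SIZE, mapped: PAGE_SIZE };
    assert_eq!(m.grow(2), Some(1));
    assert_eq!(m.len, 3 * PAGE_SIZE);
    assert_eq!(m.mapped, m.len); // back in sync after the grow
    println!("ok");
}
```

Ordering matters in the real thing: if the length were published before the pages were mapped, the host could hand out a view into memory that compiled code would still fault on.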
This makes perfect sense, thank you!
https://bytecodealliance.zulipchat.com/#narrow/stream/206238-general/topic/Memory.20Grow.20and.20Address.20Validation.20in.20WASM/near/445156870 this! For example, for a typical serverless function, even GC just slows you down, as you'll recycle the memory at the end of the invocation. The things that like to shrink memory are longer-running functions -- once you get to changing the wasm spec to support an older runtime style, you should be thinking about whether there's a way to lean into the feature, not change it.
Doesn't mean you can't! It's possible to shrink, of course. However, if you think about other features of wasm, such as the lack of read-only memory inside the module, you suddenly realize that an "OS-like" long-running module is more problematic than running a container. You still might do it!!! When you really need portability above all, for example.
ymmv
Last updated: Nov 22 2024 at 16:03 UTC