alexcrichton opened issue #4816:
Translation of type A to type B in the component model between two core wasm modules can, from one point of view, be seen as a question of inlining. The translation of type A to B from memory C to memory D can be viewed as its own function which recursively calls other translation functions depending on the structure of A (e.g. records would call subroutines for each field, and variants would call the appropriate function depending on the case, etc.).
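As a rough illustration of that view, here is a minimal interpreter-style sketch, not Wasmtime's actual adapter generator; the `Type` enum, the flat 4-byte-slot layout, and the helper names are all hypothetical:

```rust
// Hypothetical sketch of "one translation function per type"; every composite
// uses a flat 4-byte slot layout purely to keep the example small.

enum Type {
    U32,
    Record(Vec<Type>),  // translated by a subroutine per field
    Variant(Vec<Type>), // translated by dispatching on the selected case
}

fn read_u32(mem: &[u8], off: usize) -> u32 {
    u32::from_le_bytes(mem[off..off + 4].try_into().unwrap())
}

fn write_u32(mem: &mut [u8], off: usize, val: u32) {
    mem[off..off + 4].copy_from_slice(&val.to_le_bytes());
}

/// Translate a value of type `ty` from memory C (`src`) to memory D (`dst`):
/// one function per type, recursing on the structure of the type.
fn translate(ty: &Type, src: &[u8], src_off: usize, dst: &mut [u8], dst_off: usize) {
    match ty {
        Type::U32 => write_u32(dst, dst_off, read_u32(src, src_off)),
        Type::Record(fields) => {
            // Records call the translation "subroutine" for each field.
            for (i, field) in fields.iter().enumerate() {
                translate(field, src, src_off + 4 * i, dst, dst_off + 4 * i);
            }
        }
        Type::Variant(cases) => {
            // Variants copy the discriminant and then translate only the
            // payload of the case that is actually present.
            let disc = read_u32(src, src_off) as usize;
            write_u32(dst, dst_off, disc as u32);
            translate(&cases[disc], src, src_off + 4, dst, dst_off + 4);
        }
    }
}

fn main() {
    // A record of two u32 fields copied between two "memories".
    let record = Type::Record(vec![Type::U32, Type::U32]);
    let src = [1u32.to_le_bytes(), 2u32.to_le_bytes()].concat();
    let mut dst = vec![0u8; 8];
    translate(&record, &src, 0, &mut dst, 0);
    assert_eq!(dst, src);

    // A variant whose selected case (discriminant 0) carries a u32 payload.
    let variant = Type::Variant(vec![Type::U32, Type::Record(vec![])]);
    let src = [0u32.to_le_bytes(), 7u32.to_le_bytes()].concat();
    let mut dst = vec![0u8; 8];
    translate(&variant, &src, 0, &mut dst, 0);
    assert_eq!(dst, src);
}
```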
That recursive-function view, however, is not how adapter trampoline compilation works today. Instead, adapters are forcibly "inlined" whenever the structure is stack-based, and individual functions only end up getting outlined once values start hitting memory. Originally everything was 100% inlined, but this was changed in https://github.com/bytecodealliance/wasmtime/pull/4640 to fix fuzz bugs where the generated translation function was massive because the cumulative "size" of a type was quite large. The heuristic implemented in https://github.com/bytecodealliance/wasmtime/pull/4640 was that translations of a type which resides in memory always get outlined to a separate function (except for primitives).
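In other words, the post-#4640 decision roughly looks like this; the names here are made up and this is only a sketch of the heuristic as described, not the real Wasmtime internals:

```rust
enum Place {
    Stack,  // the value is passed as core wasm stack values / locals
    Memory, // the value lives in linear memory
}

enum Kind {
    Primitive,
    Record,
    Variant,
    List,
}

/// Should the translation of this (sub)type be outlined into its own adapter
/// function, or emitted inline into the caller?
fn should_outline(kind: &Kind, place: &Place) -> bool {
    match (place, kind) {
        // Stack-based values are always translated inline.
        (Place::Stack, _) => false,
        // Primitives are cheap enough to inline even when in memory.
        (Place::Memory, Kind::Primitive) => false,
        // Everything else that resides in memory gets a separate function.
        (Place::Memory, _) => true,
    }
}

fn main() {
    assert!(!should_outline(&Kind::Record, &Place::Stack));
    assert!(!should_outline(&Kind::Variant, &Place::Stack));
    assert!(!should_outline(&Kind::Primitive, &Place::Memory));
    assert!(should_outline(&Kind::List, &Place::Memory));
}
```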
This heuristic is not sufficient to prevent exponentially sized modules, and it's also somewhat suboptimal because it generates lots of function calls once things are based in memory (e.g. lists). An example of the former is a type of the form:
```wasm
(type $v1 (union u64 u64 u64 (; ... N times ... ;)))
(type $v2 (union $v1 $v1 $v1 (; ... N times ... ;)))
;; ... 14 more times ...
(type $v16 (union $v15 $v15 $v15 (; ... N times ... ;)))
```
Here the "size" of this type is $16^N$ which means that the size of the adapter generated is $16^N$, clearly quite large! This is because everything here is always transferred via an "immediate" rather than through memory, meaning it doesn't hit the memory heuristic previously implemented.
I'm filing this issue to track the implementation of a fix. We haven't seen a fuzz bug which found this case yet, but I'm presuming it's inevitable at this point. The fix I would like to implement is to keep track of the size of a type (cumulatively summed from all of its components) and additionally to have some concept of fuel when generating an adapter function. The fuel would be a size heuristic to keep any one function from becoming too large: translating a type would deduct the type's size from the fuel, and if the type's size can't be deducted then an out-of-line function would be called to perform the translation, limiting the size of any one function. For the translation of the above `$v16` type, I'd hope that this would result in roughly 16 functions in the adapter module, each of size $N$, much smaller than $N^{16}$.
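A minimal sketch of what that fuel heuristic could look like, under a few assumptions of my own: the size metric, the per-function fuel constant, the pseudo-instruction strings, and the memoization of out-of-line helpers by type are all illustrative choices, not the actual Wasmtime design.

```rust
use std::collections::HashMap;

#[derive(Clone, PartialEq, Eq, Hash)]
enum Type {
    U64,
    Union(Vec<Type>),
}

/// Cumulative "size" of a type, summed over all of its components.
fn size(ty: &Type) -> u64 {
    match ty {
        Type::U64 => 1,
        Type::Union(cases) => 1 + cases.iter().map(size).sum::<u64>(),
    }
}

/// Size budget that any single generated adapter function may consume.
const FUEL_PER_FUNCTION: u64 = 100;

#[derive(Default)]
struct Compiler {
    outlined: HashMap<Type, usize>, // out-of-line helper index per type
    funcs: Vec<Vec<String>>,        // pseudo-instruction listing per function
}

impl Compiler {
    /// Decide whether `ty` is translated inline or via an out-of-line call.
    fn translate(&mut self, ty: &Type, body: &mut Vec<String>, fuel: &mut u64) {
        if size(ty) > *fuel {
            // The remaining budget can't cover this subtree: call a dedicated
            // helper instead, keeping the current function bounded.
            let f = self.outline(ty);
            body.push(format!("call $translate_{f}"));
        } else {
            self.emit(ty, body, fuel);
        }
    }

    /// Emit the translation of `ty` inline, recursing through `translate` so
    /// that nested types can still be outlined as the budget shrinks.
    fn emit(&mut self, ty: &Type, body: &mut Vec<String>, fuel: &mut u64) {
        *fuel = fuel.saturating_sub(1); // one unit per inlined node
        match ty {
            Type::U64 => body.push("copy u64".to_string()),
            Type::Union(cases) => {
                // Real adapters would also dispatch on the discriminant; the
                // sketch just emits every arm's translation in sequence.
                for case in cases {
                    self.translate(case, body, fuel);
                }
            }
        }
    }

    /// Get (or generate) the dedicated translation function for `ty`; each
    /// helper starts with a fresh fuel budget of its own.
    fn outline(&mut self, ty: &Type) -> usize {
        if let Some(&f) = self.outlined.get(ty) {
            return f; // reuse the helper already generated for this type
        }
        let f = self.funcs.len();
        self.outlined.insert(ty.clone(), f);
        self.funcs.push(Vec::new());
        let mut body = Vec::new();
        let mut fuel = FUEL_PER_FUNCTION;
        self.emit(ty, &mut body, &mut fuel);
        self.funcs[f] = body;
        f
    }
}

fn main() {
    // Build a $v1..$v16-style tower of unions with N = 2 arms per level.
    let n = 2;
    let mut ty = Type::Union(vec![Type::U64; n]);
    for _ in 0..15 {
        ty = Type::Union(vec![ty; n]);
    }
    let mut compiler = Compiler::default();
    compiler.outline(&ty);
    println!("generated {} adapter functions", compiler.funcs.len());
}
```

In this sketch the out-of-line helpers are keyed by type, so the `$v16` tower above collapses to roughly one bounded helper per nesting level rather than one exponentially sized function, which is the shape of result I'm hoping for.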
alexcrichton labeled issue #4816:
alexcrichton labeled issue #4816:
alexcrichton closed issue #4816: