github-actions[bot] commented on issue #4813:
Subscribe to Label Action
cc @peterhuene
<details>
This issue or pull request has been labeled: "wasmtime:api", "wasmtime:config". Thus the following users have been cc'd because of the following labels:
- peterhuene: wasmtime:api
To subscribe or unsubscribe from this label, edit the <code>.github/subscribe-to-label.json</code> configuration file.
Learn more.
</details>
github-actions[bot] commented on issue #4813:
Label Messager: wasmtime:config
It looks like you are changing Wasmtime's configuration options. Make sure to
complete this check list:
- [ ] If you added a new `Config` method, you wrote extensive documentation for
  it (a doc-comment sketch in this form appears after the checklist below).

<details>
Our documentation should be of the following form:
```text
Short, simple summary sentence.

More details. These details can be multiple paragraphs. There should be
information about not just the method, but its parameters and results as
well.

Is this method fallible? If so, when can it return an error?

Can this method panic? If so, when does it panic?

# Example

Optional example here.
```

</details>
- [ ] If you added a new `Config` method, or modified an existing one, you
  ensured that this configuration is exercised by the fuzz targets (a sketch of
  such a knob follows this message).

<details>

For example, if you expose a new strategy for allocating the next instance
slot inside the pooling allocator, you should ensure that at least one of our
fuzz targets exercises that new strategy.

Often, all that is required of you is to ensure that there is a knob for this
configuration option in [`wasmtime_fuzzing::Config`][fuzzing-config] (or one
of its nested `struct`s).

Rarely, this may require authoring a new fuzz target to specifically test this
configuration. See [our docs on fuzzing][fuzzing-docs] for more details.

</details>
- [ ] If you are enabling a configuration option by default, make sure that it
  has been fuzzed for at least two weeks before turning it on by default.

[fuzzing-config]: https://github.com/bytecodealliance/wasmtime/blob/ca0e8d0a1d8cefc0496dba2f77a670571d8fdcab/crates/fuzzing/src/generators.rs#L182-L194
[fuzzing-docs]: https://docs.wasmtime.dev/contributing-fuzzing.html
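As a concrete illustration of the documentation form above, here is a minimal doc-comment sketch in that shape. The `ExampleConfig` type and its `widget` method are hypothetical stand-ins, not part of Wasmtime's actual `Config` API:

```rust
/// A hypothetical configuration builder, used only to illustrate the
/// documentation template above; it is not part of Wasmtime's `Config`.
pub struct ExampleConfig {
    widget_enabled: bool,
}

impl ExampleConfig {
    /// Creates a configuration with all options at their defaults.
    pub fn new() -> Self {
        ExampleConfig { widget_enabled: false }
    }

    /// Enables or disables the (hypothetical) widget feature.
    ///
    /// When enabled, each instance gets its own widget. This paragraph is
    /// where interactions with other options, the meaning of the `enable`
    /// parameter, and the returned `&mut Self` (for call chaining) would be
    /// described in detail.
    ///
    /// This method never returns an error and never panics.
    ///
    /// # Example
    ///
    /// ```ignore
    /// let mut config = ExampleConfig::new();
    /// config.widget(true);
    /// ```
    pub fn widget(&mut self, enable: bool) -> &mut Self {
        self.widget_enabled = enable;
        self
    }
}
```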
<details>
To modify this label's message, edit the <code>.github/label-messager/wasmtime-config.md</code> file.
To add new label messages or remove existing label messages, edit the
<code>.github/label-messager.json</code> configuration file.

</details>
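On the fuzzing side, a "knob" for a configuration option is typically just a field on an `Arbitrary`-derived generator struct that the fuzz targets forward to the real `wasmtime::Config`. Below is a minimal sketch of that pattern, using illustrative names rather than the actual `wasmtime_fuzzing` definitions:

```rust
use arbitrary::Arbitrary;

// Illustrative stand-in for a fuzz-generator config; the real knobs live in
// the `wasmtime_fuzzing` generators linked above ([fuzzing-config]).
#[derive(Arbitrary, Debug)]
pub struct GeneratedConfig {
    // Hypothetical knob: the fuzzer picks this value on every run, so both
    // settings of the new option get exercised.
    pub enable_new_option: bool,
}

impl GeneratedConfig {
    // Apply the generated settings to a real `wasmtime::Config`.
    pub fn apply(&self, config: &mut wasmtime::Config) {
        // Hypothetical: forward the generated value to the corresponding
        // `Config` method for the new option, e.g.:
        // config.new_option(self.enable_new_option);
        let _ = (config, self.enable_new_option);
    }
}
```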
cfallin commented on issue #4813:
> Actually, though, as I think more about this, I feel there's a case to be made for removing this option entirely and simply not resetting stack pages back to zero. I don't think the defense-in-depth argument holds a ton of water here because we make no attempt to reset the stack for non-async calls. If an alternate stack were always used, then I think there's a case to be made for having this as an option, since it would be turning off a uniformly off-by-default behavior; but otherwise, as-is, this option is only applicable with async support and the pooling instance allocator, both of which are already niche.
>
> @cfallin I know in the past you've argued that this behavior should remain, so I'm curious whether you still feel that way or if I could perhaps convince you otherwise.
I think it's pretty important to keep this behavior, for a few reasons:
1. Depending on regalloc and stack-frame management code to be correct in order not to leak stale data from other instances feels way too risky to me. Yes, any regalloc bug could create a read gadget that reads anything in the address space... but having sensitive data from another instance immediately on our stack, in stack slots that haven't yet been overwritten, increases risk by a very significant amount. It seems like too much to me to remove that layer of safety (zeroing out the sensitive data before the new instance ever sees it) in cases where we are otherwise very careful about isolation.
2. Because of that, I want the option to run with zeroed stacks for new instances. However, I don't care too much whether it's the default or not; there are a number of config options that folks running Wasmtime in high-performance, high-security settings will want to consider anyway, and this is just one more such setting. But the option shouldn't go away, IMHO.
3. I actually agree that the inconsistency between the async and sync cases should be addressed, but in the opposite direction: it would be great if someday we could have the option to execute sync Wasm calls on a fresh, zeroed stack as well! Almost certainly not by default, though, since that carries a more significant penalty (async needs a separate stack anyway, whereas sync can otherwise run on the host stack).
alexcrichton commented on issue #4813:
That sounds reasonable to me. @Duslia would you be OK with implementing this as a new option, but with the defaults flipped? That is, the option would enable the zeroing behavior that is currently the default today, and by default Wasmtime wouldn't zero async stacks.
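Assuming the option eventually landed as something like `Config::async_stack_zeroing` (the method name here is an assumption; check the `Config` docs for the release you are using), opting back into the zeroing behavior would look roughly like this:

```rust
use wasmtime::{Config, Engine};

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    // Zeroing only matters for async execution, where stacks are reused
    // (requires building Wasmtime with the `async` feature).
    config.async_support(true);
    // Assumed method name: opt back into zeroing reused async stacks. With
    // the flipped default discussed above, Wasmtime would leave stacks
    // unzeroed unless this is enabled.
    config.async_stack_zeroing(true);
    let _engine = Engine::new(&config)?;
    Ok(())
}
```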
alexcrichton commented on issue #4813:
I pushed up some tweaks to the wording here but otherwise this looks good to me, thanks!