github-actions[bot] commented on issue #3820:
Subscribe to Label Action
cc @peterhuene
<details>
This issue or pull request has been labeled: "wasmtime:api", "wasmtime:config". Thus the following users have been cc'd because of the following labels:
- peterhuene: wasmtime:api
To subscribe or unsubscribe from this label, edit the <code>.github/subscribe-to-label.json</code> configuration file.
Learn more.
</details>
github-actions[bot] commented on issue #3820:
Label Messager: wasmtime:config
It looks like you are changing Wasmtime's configuration options. Make sure to
complete this check list:
- [ ] If you added a new `Config` method, you wrote extensive documentation for it.

<details>
Our documentation should be of the following form:
```text
Short, simple summary sentence.

More details. These details can be multiple paragraphs. There should be
information about not just the method, but its parameters and results as
well.

Is this method fallible? If so, when can it return an error?

Can this method panic? If so, when does it panic?

# Example

Optional example here.
```

</details>
- [ ] If you added a new `Config` method, or modified an existing one, you ensured that this configuration is exercised by the fuzz targets.

<details>
For example, if you expose a new strategy for allocating the next instance
slot inside the pooling allocator, you should ensure that at least one of our
fuzz targets exercises that new strategy.

Often, all that is required of you is to ensure that there is a knob for this
configuration option in [`wasmtime_fuzzing::Config`][fuzzing-config] (or one
of its nested `struct`s).

Rarely, this may require authoring a new fuzz target to specifically test this
configuration. See [our docs on fuzzing][fuzzing-docs] for more details.

</details>
- [ ] If you are enabling a configuration option by default, make sure that it has been fuzzed for at least two weeks before turning it on by default.

[fuzzing-config]: https://github.com/bytecodealliance/wasmtime/blob/ca0e8d0a1d8cefc0496dba2f77a670571d8fdcab/crates/fuzzing/src/generators.rs#L182-L194
[fuzzing-docs]: https://docs.wasmtime.dev/contributing-fuzzing.html
<details>
To modify this label's message, edit the <code>.github/label-messager/wasmtime-config.md</code> file.
To add new label messages or remove existing label messages, edit the
<code>.github/label-messager.json</code> configuration file.</details>
tschneidereit commented on issue #3820:
> Even if we `mlock` the original `Module` into memory we might still run the risk that the kernel page cache doesn't have entries for the module's memory image.
@alexcrichton by this, do you mean that the very first time a module is instantiated / a particular mapped page is accessed in an instance, it needs to be mapped in from disk?
If so, that's the intended behavior for this patch: basically, the idea is to not have to read all pages of all modules at startup, with the associated increases in startup time and RSS. Instead, we'd still map pages in lazily, but once we have them, we keep them around for sure.
Eagerly mapping in pages is a separate thing that we might also want to introduce support for, but I think it's best to have these be separate.
alexcrichton commented on issue #3820:
Ah I see. I did indeed mean during instantiation, and while I don't think we want to eagerly pay the cost of initializing memory (as that defeats the original purpose of lazy init), there could be a theoretical case made that the first page fault on each page should not be as slow as going to disk. That being said, I don't think we can guarantee that, because we can't guarantee membership in the kernel's page cache with `mlock`.

If we still want to lazily page things in, though, then I think this may want to use `mlock2` with `MLOCK_ONFAULT`, because it currently uses `mlock`, which I believe guarantees that everything is resident after `mlock` returns, so it would have RSS and startup-time implications.
sunfishcode commented on issue #3820:
The macOS failure here is because `mlock_with`/`mlock2` is only available on Linux.
bjorn3 commented on issue #3820:
What exactly would be the benefit of this change? Once a module is instantiated and run, all necessary pages should already be in memory, right? The only thing that can kick them back out again is a low-memory condition, in which case `mlock` would require the kernel to evict other pages even if they are used more frequently.
tschneidereit commented on issue #3820:
@bjorn3 you're right that in most contexts this doesn't make much sense. It can be useful however in settings where the goal is to have close to full memory usage at all times, but with most data being evictable because it's just there for caching purposes. In such a setting, there's often more information available on the application level than on the kernel level about which data is more important to retain, and this is an attempt to make use of that information in a better way.
alexcrichton commented on issue #3820:
As a heads up @pchickey I just pushed to this branch. I rebased it and tweaked things a bit, but this is still not in a landable state I think.
Last updated: Nov 22 2024 at 16:03 UTC