tschneidereit commented on issue #3787:
A very conservative answer to the above risk would be a new `Module` constructor, something like `Module::map_file_direct(File)`, and then explicitly avoiding the direct mmap for the existing API that just takes a filename -- though I'm ambivalent about that, as it would mean the perf benefits are hidden in a non-default place that the average user might not find either.
Given that, as you say, this is already a bad idea, I think the downside you point out here weighs heavily. Is there any kind of actual safety downside to this relative to the current situation? If not, then we should either make the current behavior safer by default, or continue making use of the benefits we get from this approach. (I also think that it's fine to assume the continued existence of files like this, with the argument being "don't shared libraries work this way, too?" or something similarly handwavy in that direction.)
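For concreteness, here is a hedged sketch of what such an opt-in constructor might look like. The name `map_file_direct`, its signature, and the stated safety contract are illustrative only, not an existing Wasmtime API; it's written as a free function since inherent methods can't be added to a foreign type from outside the crate:

```rust
use std::fs::File;
use wasmtime::{Engine, Module};

/// Map `file` directly into memory as a precompiled module instead of
/// copying its contents. (Hypothetical API, sketch only.)
///
/// # Safety
/// `file` must be trusted output of `Module::serialize`, and it must remain
/// unmodified for as long as the returned `Module` (or anything instantiated
/// from it) is alive.
pub unsafe fn map_file_direct(_engine: &Engine, _file: File) -> anyhow::Result<Module> {
    // A real implementation would mmap the file and deserialize the mapping
    // in place rather than reading it into an owned buffer.
    unimplemented!()
}
```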
alexcrichton commented on issue #3787:
For performance numbers, I actually naively assumed it wouldn't really matter here. I don't think our microbenchmarks will show much today since it's all just hitting the kernel's page cache, and I don't know enough about mmap-from-file performance to say whether that's what we'd continue to see out in the wild.
What I got for "instantiate large module", though, is below. In this benchmark it was purely instantiation, so only setting up VMA mappings and not actually doing anything with them:
```
parallel/pooling/rustpython.wasm: with 1 thread
                        time:   [5.1166 us 5.1196 us 5.1226 us]
parallel/pooling/rustpython.wasm: with 1 thread (loaded from *.cwasm)
                        time:   [5.0117 us 5.0143 us 5.0170 us]
```

I then modified the benchmark to write 0, from Rust, to every single page of memory after instantiation succeeded, and I got:

```
parallel/pooling/rustpython.wasm: with 1 thread
                        time:   [4.0414 ms 4.0418 ms 4.0422 ms]
parallel/pooling/rustpython.wasm: with 1 thread (loaded from *.cwasm)
                        time:   [4.0682 ms 4.0686 ms 4.0690 ms]
```

Running this a few times, the results seemed consistent, although the "load all the pages" numbers had enough variance that I think the two configurations are roughly equivalent. The full diff for the benchmark is https://gist.github.com/alexcrichton/7e555a405f2815ec69fcac659b5d85de, and the first set of numbers was collected without the changes to the `instantiate` function which write to memory.
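For a rough idea of what that modification can look like, here is a minimal sketch (not the actual diff from the gist above; it assumes the module exports a linear memory named `"memory"` and 4 KiB host pages):

```rust
use wasmtime::{Instance, Store};

// Touch every page of the instance's exported linear memory so the
// measurement includes faulting pages in, not just creating the mappings.
fn touch_all_pages(store: &mut Store<()>, instance: &Instance) {
    const PAGE_SIZE: usize = 4096; // assumed host page size
    let memory = instance
        .get_memory(&mut *store, "memory")
        .expect("module exports a linear memory named `memory`");
    for page in memory.data_mut(&mut *store).chunks_mut(PAGE_SIZE) {
        page[0] = 0;
    }
}
```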
alexcrichton commented on issue #3787:
In terms of safety, `Module::deserialize_file` is already `unsafe` because loading arbitrary data is not memory safe, so adding "this needs to stay as-expected for the entire duration of the lifetime of the `Module`" to the list of requirements doesn't seem so bad. I don't think it makes the safety any worse relative to what we have today?
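For context, a minimal usage sketch of the API in question (the `.cwasm` path is illustrative); the change being discussed just adds the "don't modify the file while the `Module` is alive" clause to the existing safety contract:

```rust
use wasmtime::{Engine, Module};

fn load(engine: &Engine) -> anyhow::Result<Module> {
    // SAFETY: `app.cwasm` is trusted output of `Module::serialize` from a
    // compatible Wasmtime version, and (with this change) nothing modifies
    // the file for as long as the returned `Module` is in use.
    unsafe { Module::deserialize_file(engine, "app.cwasm") }
}
```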
alexcrichton commented on issue #3787:
Hm, something I just thought of: I know that on Linux you can't write to an executable that's currently in use in some other process, which is exactly what we want here. I found that `MAP_DENYWRITE` seems like it would do this as an option to `mmap`, but the man page says that it's an ignored flag nowadays due to denial-of-service attacks (I guess it lets you just exclusively lock files for yourself against everyone else's will). Do others know how we could get this behavior, though? That might help the prevent-future-writes problem a bit.
cfallin commented on issue #3787:
In terms of safety, `Module::deserialize_file` is already `unsafe` because loading arbitrary data is not memory safe, so adding "this needs to stay as-expected for the entire duration of the lifetime of the `Module`" to the list of requirements doesn't seem so bad. I don't think it makes the safety any worse relative to what we have today?

Yup, that sounds like the right path to me; the risk was already there, it's just good to warn about it now :-)
Re: benchmarking -- a thought occurred to me: this is mostly improving module-load performance, so it would be interesting to benchmark module loading! Perhaps a measured inner loop of (module load, instantiate, terminate). In theory, with a module that has a very large heap (ahem SpiderMonkey), we should see load times that are O(heap size) without this PR, and O(1)-ish with this PR.
The above isn't really necessary to get this in I think -- with no negative impacts to instantiation or pagefault cost during runtime, and with the RSS benefit, I'm already convinced it's a good thing; but it would be a good way to demonstrate the benefit if you want to do that.
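For illustration, here is a rough sketch of such a measured loop using plain `std::time` rather than the criterion harness the existing benchmarks use; the `.cwasm` filename is illustrative, and the module is assumed to have no imports:

```rust
use std::time::Instant;
use wasmtime::{Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();
    let iters: u32 = 100;
    let start = Instant::now();
    for _ in 0..iters {
        // SAFETY: the file is trusted output of `Module::serialize` from a
        // compatible Wasmtime, and nothing modifies it while the `Module` is alive.
        let module = unsafe { Module::deserialize_file(&engine, "spidermonkey.cwasm")? };
        let mut store = Store::new(&engine, ());
        // Assumes the module has no imports; use a `Linker` otherwise.
        let _instance = Instance::new(&mut store, &module, &[])?;
        // Dropping the store, module, and instance here is the "terminate" step.
    }
    println!("avg per iteration: {:?}", start.elapsed() / iters);
    Ok(())
}
```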
cfallin commented on issue #3787:
Hm, something I just thought of: I know that on Linux you can't write to an executable that's currently in use in some other process, which is exactly what we want here. I found that `MAP_DENYWRITE` seems like it would do this as an option to `mmap`, but the man page says that it's an ignored flag nowadays due to denial-of-service attacks (I guess it lets you just exclusively lock files for yourself against everyone else's will). Do others know how we could get this behavior, though? That might help the prevent-future-writes problem a bit.

This sent me down a very interesting rabbithole -- apparently, until recently, `MAP_DENYWRITE` did work. Indeed, if I strace `/bin/ls` on my system I see the flag included in the mmaps of system libraries. A quick test shows that while one binary executes on my system, I get `ETXTBSY` when trying to open the file `O_RDWR` from another process. (Superficially one can "write" it by copying over it or editing with a text editor, but this really just replaces the directory entry, and doesn't open the existing file for writing.) But it seems that `MAP_DENYWRITE` is going away -- it was removed from the kernel last August -- and so this same issue, if it is one, will exist for system binaries/libraries too.
So just the warning and the fact that the API is `unsafe` seems enough, to me at least!
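A hedged sketch of the quick test described above, in Rust (it uses the `libc` crate for the errno constant, and the path you probe is up to you):

```rust
use std::fs::OpenOptions;

// Returns true if opening `path` for writing fails with ETXTBSY, i.e. the
// kernel is denying writes because the file is a currently-executing binary.
fn is_text_busy(path: &str) -> std::io::Result<bool> {
    match OpenOptions::new().write(true).open(path) {
        Ok(_file) => Ok(false),
        Err(e) if e.raw_os_error() == Some(libc::ETXTBSY) => Ok(true),
        Err(e) => Err(e),
    }
}
```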
alexcrichton commented on issue #3787:
a thought occurred to me: this is mostly improving module-load performance, so it would be interesting to benchmark module loading
This is a great idea and something I forgot! It inspired me to write this up as a benchmark -- https://github.com/bytecodealliance/wasmtime/pull/3790. My first attempt at benchmarking showed no improvement from this PR, but I was alarmed at the relatively slow deserialize time, which spawned https://github.com/bytecodealliance/wasmtime/pull/3789. Upon further thought, though, I remembered that memfd creation is lazy and consequently not affected by what I was benchmarking (purely deserialization, not deserialization plus instantiation).
I then updated the existing code to more eagerly construct the memfd, and that regressed relative to this PR by ~10x, with most of the time spent in `write` and `memcpy` (movement from data segments to the image). So it looks like this can definitely help improve first-instantiate time, and we can probably make memfd creation un-lazy after this.
Anyway I will get to the rest of the review in a moment now...
cfallin commented on issue #3787:
I then updated the existing code to more eagerly construct the memfd, and that regressed relative to this PR by ~10x, with most of the time spent in `write` and `memcpy` (movement from data segments to the image). So it looks like this can definitely help improve first-instantiate time, and we can probably make memfd creation un-lazy after this.

This is great! And yeah, I agree: given that the memfd state is now cheap to create on load (basically a wrapper around the `Arc<File>` with some metadata), there's no reason for it to be lazy anymore.