cfallin commented on issue #5004:
@pepyakin, thanks for this change. However, I'm not sure that we want to take it as-is; or at least, I'd like to see more data.

The Sightglass runs you provide are all over the place -- in some cases this is better by 5-10%, in other cases `main` is better by 5-10%. There doesn't appear to be a clear trend to me. What's more concerning, the instantiation times have wild swings as well, but instantiation shouldn't have been affected at all by this change. This makes me concerned about the reliability of the measurement setup and confidence-interval computation.

Would you be willing to take a few of the larger benchmarks (`spidermonkey` and `bz2`, say), precompile them to `.cwasm`s, and then measure a `wasmtime run ...` command with your tool of choice (I like `hyperfine`)?
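For instance, a workflow along these lines should do it (a sketch only; the filenames are placeholders, and a given benchmark may need extra flags):

```sh
# Precompile once so compilation cost is excluded from the measurement.
wasmtime compile spidermonkey.wasm -o spidermonkey.cwasm

# Benchmark repeated runs of the precompiled module; --warmup discards
# initial runs so caches are hot before measurement starts.
hyperfine --warmup 3 'wasmtime run --allow-precompiled spidermonkey.cwasm'
```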
I'm curious how much variance this sees and what trends we'll get.

I share @bjorn3's concern about code size here as well. Aligning every loop to a 64-byte granularity is likely to bloat some cases nontrivially. Could you report the effect on code size (the `.text` segment of the `.cwasm`s) for the benchmarks as well?
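For the code-size numbers, one option (a sketch; Wasmtime's `.cwasm` artifacts use an ELF container, and the before/after filenames here are hypothetical) is to compare section sizes with standard binutils:

```sh
# `size` reports the .text segment of each ELF artifact directly.
size spidermonkey-main.cwasm spidermonkey-aligned.cwasm

# Or list the full section header table and pick out .text:
objdump -h spidermonkey-aligned.cwasm | grep '\.text'
```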
pepyakin commented on issue #5004:
That's a good catch. I must admit I did not notice that the swings were in the instantiation numbers! Probably something was running on my test machine and ruined the measurements. I also agree with the points made by @bjorn3. In retrospect, that seems obvious :face_palm:
I had the impression that this PR would be a quick one, but it seems I was wrong =) I will try to push it forward, but it won't be my primary focus, so it will likely take some time[^1].
[^1]: Should anyone feel an acute need, or just want to take this over for other reasons, feel free to take it on.