abrown requested cfallin for a review on PR #10538.
abrown requested wasmtime-compiler-reviewers for a review on PR #10538.
abrown opened PR #10538 from abrown:fix-10529 to bytecodealliance:main:
Along the lines of #10389 and #10478, this change fixes the case where an XMM register is pretty-printed during logging, which may happen before register allocation has provided a true HW register.
Fixes #10529.
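For context, a minimal sketch of the kind of guard such a fix needs, using hypothetical stand-in types (the real code uses Cranelift's register types, not these enums): pretty-printing has to tolerate a register that is still virtual rather than assume a hardware encoding already exists.

```rust
// Hypothetical stand-ins for Cranelift's register types, for illustration
// only; the actual fix lives in the x64 backend's pretty-printing code.
enum Reg {
    /// A hardware register, known only after register allocation.
    Real { hw_enc: u8 },
    /// A virtual register, as seen during lowering (pre-regalloc).
    Virtual { index: u32 },
}

// Print an XMM operand without assuming regalloc has already run:
// fall back to a virtual-register name instead of panicking.
fn pretty_print_xmm(reg: &Reg) -> String {
    match reg {
        Reg::Real { hw_enc } => format!("%xmm{hw_enc}"),
        Reg::Virtual { index } => format!("%v{index}"),
    }
}

fn main() {
    assert_eq!(pretty_print_xmm(&Reg::Real { hw_enc: 3 }), "%xmm3");
    assert_eq!(pretty_print_xmm(&Reg::Virtual { index: 7 }), "%v7");
}
```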
alexcrichton commented on PR #10538:
Could the test added in https://github.com/bytecodealliance/wasmtime/pull/10478 be expanded to cover this too?
abrown commented on PR #10538:
> Could the test added in #10478 be expanded to cover this too?

Yeah, we'd need a few more SSE lowerings plumbed through to really trigger this; I can't seem to find one that goes all the way from WAT to the assembler, uses an XMM register, and hits the panic.
cfallin submitted PR review:
Thanks!
An idle thought about testing: would it be possible to run the filetests in CI with `RUST_LOG=trace`? That would certainly give us coverage of almost all or all lowerings (we expect them all to be tested); only uncertainty is what it would do to test time, but probably OK?
abrown commented on PR #10538:
> Thanks!
>
> An idle thought about testing: would it be possible to run the filetests in CI with `RUST_LOG=trace`? That would certainly give us coverage of almost all or all lowerings (we expect them all to be tested); only uncertainty is what it would do to test time, but probably OK?

I think that makes sense. Here's a slowdown comparison:

```
$ cd cranelift
$ RUST_LOG=cranelift_codegen=trace /usr/bin/time cargo test
...
24.65user 0.57system 0:04.66elapsed 540%CPU

$ cd cranelift
$ /usr/bin/time cargo test
...
17.79user 0.12system 0:04.18elapsed 427%CPU
```

Slower, but perhaps worth it. Where does this fit in `main.yml`, though? A whole new `test_*` job?
alexcrichton commented on PR #10538:
This won't fit easily into `main.yml`, unfortunately. The `cranelift-tools` crate is tested in a "soup" with a bunch of other crates which are partitioned automatically. As global state (`RUST_LOG`), this also isn't workable as a sibling `#[test]`. What I might recommend is a sibling `cranelift/tests/filetests_with_trace.rs` which is mostly the same but configures `env_logger` or similar. That keeps it isolated into a single process at least.
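A minimal sketch of what that sibling harness could look like, assuming `cranelift_filetests` exposes a `run` entry point like the one the existing `filetests` harness uses (the exact signature, helper names, and suite path below are assumptions, not the code that actually landed):

```rust
// cranelift/tests/filetests_with_trace.rs -- a sketch; the
// cranelift_filetests::run signature and the "filetests" path
// are assumptions.

#[test]
fn filetests_with_trace_logging() {
    // Install trace-level logging for cranelift_codegen as process-global
    // state; keeping this in its own test binary avoids fighting with
    // other tests over the global logger.
    env_logger::Builder::new()
        .filter_module("cranelift_codegen", log::LevelFilter::Trace)
        .init();

    // Run the same suite as the plain harness; a panic while
    // pretty-printing (e.g. a still-virtual XMM register) fails the test.
    cranelift_filetests::run(false, false, &["filetests".to_string()])
        .expect("filetests should pass with trace logging enabled");
}
```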
abrown updated PR #10538.
abrown commented on PR #10538:
This seems a bit like overkill but, hey, this virtual-vs-HW register panic keeps popping up. ea36958 adds a test to run all the filetests with logging enabled, which I confirmed catches the bug. When everything goes well, this is what I see time-wise:
```
$ /usr/bin/time cargo test --package cranelift-tools --test filetests
...
14.87user 0.13system 0:01.15elapsed 1304%CPU

$ /usr/bin/time cargo test --package cranelift-tools --test logged-filetests
...
20.41user 0.29system 0:01.42elapsed 1451%CPU
```

Of course, if anything goes poorly, then one can expect an afternoon-long wait while the test runner prints every log it has captured. I guess that is incentive to keep the pretty-printing working!
abrown merged PR #10538.