jameysharp commented on issue #6111:
This is neat! It's clearly handy to be able to regenerate test expectations like this.
I don't know for sure that precise output tests are a good idea for optimizations though. Writing assertions about only the specific aspects we're trying to test makes the tests more resilient to unrelated changes in the optimizer.
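(For illustration only, not from the issue: a targeted `test optimize` filetest in roughly this style asserts just the rewrite under test, e.g. a hypothetical `x + 0 == x` simplification, with a single `check` directive instead of the full optimized output. The function name and check line below are made up for the sketch.)

```
test optimize
set opt_level=speed
target x86_64

function %add_zero(i32) -> i32 {
block0(v0: i32):
    v1 = iconst.i32 0
    v2 = iadd v0, v1
    return v2
}
; check: return v0
```

A precise output test would instead record the entire optimized function body, so any unrelated change in the optimizer's output would force the expectation to be regenerated.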
I'm curious if @cfallin or @fitzgen have opinions on this.
Kmeakin commented on issue #6111:
> I don't know for sure that precise output tests are a good idea for optimizations though. Writing assertions about only the specific aspects we're trying to test makes the tests more resilient to unrelated changes in the optimizer.

Could we add a script to automatically insert the correct `filecheck` commands? LLVM has something similar: https://github.com/llvm/llvm-project/blob/main/llvm/utils/update_any_test_checks.py
cfallin commented on issue #6111:
> I don't know for sure that precise output tests are a good idea for optimizations though. Writing assertions about only the specific aspects we're trying to test makes the tests more resilient to unrelated changes in the optimizer.
>
> I'm curious if @cfallin or @fitzgen have opinions on this.
Yes, I think we want to retain this property: many of the tests are checking just "this one value becomes X" rather than freezing the entire output. There is still some unnecessary capture of implementation details (e.g., the value number in the output exposes how many rewrites we went through), but less coupling is better here.
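(A hypothetical sketch of that coupling, not taken from the issue; the `regex:` and `$(...)` capture directives below are from the `filecheck` crate the filetests use, but the exact syntax shown here is an assumption:)

```
; Brittle: "v5" and "v4" pin the exact value numbering the rewrites happened to produce.
; check: v5 = ishl v0, v4
; check: return v5

; Looser: match any value number and only assert that the shifted value is what gets returned.
; regex: V=v\d+
; check: $(res=$V) = ishl v0, $V
; check: return $res
```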
It would be nice to try to find ways to automate updates though!
fitzgen commented on issue #6111:
Yeah, for tests that assert that one particular peephole applied, it is nice not to have the full precise output. There are certainly other tests that are written in a less targeted way and could be migrated to precise output tests on a case-by-case basis. Because of this, it would be nice to support the feature, even if we don't blanket-enable it for every `test optimize` test.