cfallin opened issue #3751:
In conversation recently with several folks, it became apparent that the way in which ISLE rules' precedence is defined can be slightly at odds with intuition, or at least with some common idioms.
There have been cases (e.g. here and several others) where the usual pattern-matching idiom from functional programming (and Rust `match` expressions) of

```
(rule (lower (my_inst P1 P2)) ...)
(rule (lower (my_inst P3 _)) ...)
(rule (lower (my_inst _ P4)) ...)
(rule (lower (my_inst _ _)) ...)
```

occurs, and the later, more general cases actually must be shadowed by the earlier, more specific cases.
In general, the ISLE design intended to (and the DSL compiler generally does) prioritize specific cases over general cases, so this is fine on the surface, but two problems arise.
First, as we look forward to verification, it would be better if every rule's meaning were independent of other rules. In other words, a more general rule should always be correct where it could be applied. A more specific rule might apply first, and give a better lowering; but any applicable rule (one with a LHS whose pattern matches) should give a correct answer. In a formal sense, we could say that execution of the lowering has multiple possible results (depending on rule application choice), and we require that they all be correct.
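As a minimal sketch of what rule-independence means here (the `iadd`/`iconst` patterns and the `add_reg_reg`/`mov_reg` constructors are invented for illustration, not taken from a real backend), consider a general rule and a refinement that are each correct on their own:

```
;; General rule: correct for any (iadd x y).
(rule (lower (iadd x y))
      (add_reg_reg x y))

;; Refinement: when the second operand is the constant zero, a plain
;; move is also a correct lowering, and cheaper. Either rule may fire
;; on (iadd x (iconst 0)) and the result is still correct.
(rule (lower (iadd x (iconst 0)))
      (mov_reg x))
```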
Second, while the ordering rules (more specific first) are generally intuitive, they are still DSL compiler behavior that needs to be internalized and incorporated into a mental model. This is cognitive load that is possibly made worse by the fact that Rust pattern-matching semantics do not have this potential reordering by special heuristic.
There are good reasons for the reordering. I had originally proposed a sort of what-you-see-is-what-you-get "the rules apply in order" principle, as if the whole body of rules were one giant `match`. This is extremely simple; there is no misunderstanding what the DSL compiler will do; but as @fitzgen and others pointed out, it's not really compatible with a world where we have tools that generate lots of special-case lowerings and "append" them to the set of rules. In other words, we really do want some top-level sorting that puts more specific lowerings first, and we'd prefer that this not have to happen by literally sorting our source code.
It seems that the root of the problem is that there are two different idioms or use-cases for having multiple rules with overlapping left-hand-side patterns:
- Refinements. A special case can be lowered more efficiently in a certain way, but this refinement isn't necessary for correctness. This is the "tool discovers lots of special lowerings" use-case.
- Conditional case breakdowns for a local pattern. In the example linked above, the idiom was really "for -1, do this; for not -1, do that." This is the sort of thing one would use a `match` for in Rust (see the sketch below).

An insight I think might be useful is that the second case happens at the local level, while the first case happens at the global level. So I'd like to propose a mechanism that might get us the benefits of both: locally-ordered groups of rules.
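To sketch that second idiom (the `lower_div` helper and its constructors are hypothetical, and real ISLE would spell the constant pattern via an extractor), the fallback rule below is not correct in isolation; it is only correct because the -1 case shadows it:

```
;; Divisor of -1: negate instead (and sidestep the special case).
(rule (lower_div x (iconst -1))
      (neg x))

;; Fallback: NOT independently correct; it assumes the divisor is
;; not -1, i.e. it relies on the rule above taking precedence.
(rule (lower_div x y)
      (div_reg_reg x y))
```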
Specifically I'm thinking of some sort of special form that groups a collection of rules into one unit and implies a strong precedence between them. In other words, if the first rule's LHS applies, then the first rule must be chosen over the other rules. Something like (forgiving some arbitrary syntax invention; we can bikeshed later):
```
(group
  (rule (my_helper (Enum.A)) 1)
  (rule (my_helper (Enum.B)) 2)
  (rule (my_helper _) 3))
;; semantically, returning 3 is not a valid execution
;; if either of the above matches
```
Then any analysis that is attempting to understand and verify rules' meanings in isolation has an explicit directive that these rules must be considered together, and rule `i` must be understood as having a pattern that is "NOT (patterns 0..i-1), AND (original pattern)".

We can work this into the trie construction in the ISLE compiler backend by doing something similar to the priority mechanism. I haven't worked out all of the details here (else I would show a prototype alongside this issue!) but I'm fairly sure it can be done; the only question is how much of the case-merging this will disrupt. But if the form is mostly used for helpers or for small groups of rules, this should be reasonable, I think.
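On the compiler-backend side, one plausible sketch (my assumption about the encoding, not a worked-out design) is to desugar a group into explicit numeric rule priorities, with earlier group members getting higher priority:

```
;; Hypothetical desugaring of the (group ...) example above,
;; assuming higher-priority rules are matched first:
(rule 2 (my_helper (Enum.A)) 1)
(rule 1 (my_helper (Enum.B)) 2)
(rule 0 (my_helper _) 3)
```

The priorities alone capture the execution order; the explicit `group` form would additionally tell a verifier to conjoin the negations of the earlier patterns into the later rules' preconditions.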
Thoughts? (cc @fitzgen @abrown @uweigand)
fitzgen commented on issue #3751:
As we previously discussed, this makes sense to me.
fitzgen labeled issue #3751 with "isle".
uweigand commented on issue #3751:
I agree this would be useful.
avanhatt commented on issue #3751:
Sorry I'm late to this issue, but I agree this would be useful and probably prevent some roadblocks for verification.
Our current verification prototype builds a list of assumptions from the LHS of each rule. To handle groups of this form we would probably just (as suggested) add a negation of the conjunction of previous rules' assumptions (e.g., `(not (and (= arg1 Enum.A) ...))`) to the context of the later rules. In more complicated rules, any other assumptions (e.g., `fits_in_64`) from the LHS would also be included in these terms, but with the `and` this should have the intended effect of still allowing later rules to keep those same assumptions unrelated to the match.

We would need a few changes to the prototype, but nothing too bad. Some strategy like this will be necessary for verification if, as in @cfallin's example, later rules are semantically incorrect without excluding previous match cases.
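Stated slightly more formally (my paraphrase of the scheme above, writing $P_i$ for the conjunction of rule $i$'s LHS assumptions and $\mathrm{ok}_i$ for its correctness obligation), the condition to verify for the $i$-th rule of a group would be:

$$\forall x.\; \big( P_i(x) \land \neg P_0(x) \land \cdots \land \neg P_{i-1}(x) \big) \implies \mathrm{ok}_i(x)$$

For the `(group ...)` example above, the wildcard rule's context would then include `(not (= arg1 Enum.A))` and `(not (= arg1 Enum.B))`.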
cfallin labeled issue #3751.