cfallin opened PR #7965 from cfallin:pcc-range-fix to bytecodealliance:main:
Previously, the proof-carrying code (PCC) mechanism optionally described each value in the program with at most one fact of the form:
- A static range (min and max are `u64`s); or
- A dynamic range (min and max are symbolic expressions: sums of global values or SSA values and offsets).
This worked well enough in the filetests that exercised PCC for static and dynamic Wasm memory cases: the fact language was expressive enough to describe all the invariants.
However, as soon as the optimizer started to combine different accesses together -- for example, by sharing one `select_spectre_guard` across multiple accesses -- it quickly became apparent that we might need to describe both a static and a dynamic range for one value. The existing system fails to check, producing `Conflict` facts, on the new test in `pcc_memory.rs` here.

To make the fact language more expressive, I worked through a series of increasingly expressive variants until finding one that seems to work:
First, a variant with combined static and dynamic ranges: e.g., `range(0, 0xffff, 0, gv1)` means that a value is within the given static range and also less than or equal to `gv1`. This allows the intersection of dynamic and static facts to work, but has a lot of weird edge-cases because it's like two analyses glued together in a product type; we really want to cross-compare the two sometimes, e.g. if we know static range facts about the symbolic expressions and want to apply those elsewhere. It also led to weird logic due to redundancy: the expressions could also be constants (no "base value"), so we handled the constant-value case twice. It seemed that the two worlds should actually be merged completely.

Next, a variant with only `Expr`s, and two cases for a range: `Exact` (with one or more expressions that are known to be equivalent to the value) and `Inclusive`, with `min` and `max` lists. In both cases we want lists because we may know that a value is, for example, less than both `v1` and `gv2`; both are needed to prove different things, and the relative order of the two is not known, so it cannot be simplified.

This was almost right; it fell apart only when working out `apply_inequality`, where it became apparent that we sometimes need to state that a value is exactly equal to some expressions and less than others (e.g., exactly `v1` and also in a 32-bit range). Aside from that, it was also a bit awkward to have a four-case (or three-case for commutative ops) breakdown in all ops: exact+exact, exact+inclusive, inclusive+inclusive.
Finally, the variant in this PR: every range is described by three lists, the `min`, `equal`, and `max` sets of expressions.

The way this works is: for any value for which we have a fact, we collect lower and upper bounds, and also expressions we know it is equivalent to. Like an egraph, we don't drop facts or "simplify" along the way, because any of the bits may be useful later. However, we don't explode in memory or analysis time, because this is bounded by the stated facts: we locally derive the "maximum fact" for the result of an addition, then check whether it implies the stated fact on the actual result, then keep only that stated fact.
The value described by these sets is within the interval that is the intersection of all combinations of min/max values; this makes `intersect` quite simple (union the sets of bounds, and of the equalities, because the value must satisfy both). Some of the other ops and tests -- `union`, and especially "is this value in the range" or "does this range imply this other range" -- are a little intricate, but I think correct. To pick a random example: an expression is within a range if we can prove that it is greater than or equal to all lower bounds, and vice versa for upper bounds; OR if it is exactly equal to one of the equality bounds. Equality is structural on `Expr`s (i.e., the default derived `Eq` is valid) because they are not redundant: there is only one way to express `v1+1`, and we can never prove that `v1 == v2` within the context of one expression.

I will likely write up a bunch more in the top doc-comment and throughout the code; this is meant to get the working system in first. (I'm also happy to do this as part of this PR if preferred.)
There are also some ideas for performance improvement if needed, e.g. by interning `ValueRange`s and then memoizing the operations (`intersect(range2, range5) = range7` in a lookup table). I haven't measured perf yet.

I also haven't fuzzed this yet but will do so and then submit any required bugfixes separately. Hopefully we can get this turned on soon!
cfallin requested abrown for a review on PR #7965.
cfallin requested wasmtime-compiler-reviewers for a review on PR #7965.
cfallin requested alexcrichton for a review on PR #7965.
cfallin requested wasmtime-core-reviewers for a review on PR #7965.
cfallin requested fitzgen for a review on PR #7965.
cfallin updated PR #7965.
fitzgen submitted PR review:
Overall LGTM, just a few questions and general suspicion flagging.
But most importantly: shouldn't there be a new clif filetest or two for the GVN behavior that warranted this fact representation change?
fitzgen created PR review comment:
It seems like `Max` should be a variant of `Expr` rather than `BaseExpr`? Because it seems like offsets shouldn't matter once we reach `Max`? So having both a non-zero offset and a base of `Max` seems like something we should make unrepresentable.
fitzgen created PR review comment:
Why did this need to become an `i128`? That's kind of scary given that we have smallvecs of `Expr`s.
fitzgen created PR review comment:
What if `lhs.base == BaseExpr::Max` but `lhs.offset > rhs.offset`?
fitzgen created PR review comment:
The `O(n^2)` loop is a little scary here, but I guess in practice this is sized at most by the number of unique values we GVN together? Which seems like it should generally be relatively small? But also something that is ~unbounded and user-controllable :grimacing:

But I guess they aren't controlling the actual facts used during a wasm load, and the inputs to an `i32.load` instruction are basically any `i32` value. So I think this is maybe okay?

Seems like something we should think about and have a comment for, calming readers down.
fitzgen created PR review comment:
I guess let's just get something working first and worry about performance later, but having a bunch of smallvecs is a little concerning. In general, we try to keep all the guts of the data in `DataFlowGraph` `Copy` and use things like `cranelift_entity::EntityList` to manage non-POD data within. But yeah, we can come back to this in the future.
fitzgen created PR review comment:
This feels like a suspicious operation. Rounding down seems sus when we are talking about proving that accesses are in-bounds!
fitzgen created PR review comment:
Seems like this could be cleaned up with https://doc.rust-lang.org/nightly/std/fmt/struct.Formatter.html#method.debug_set
fitzgen created PR review comment:
Interesting that we can return any of `equal` and be correct here. Does it matter? How are callers using this method? Does it make sense to tweak what is being asked here?
cfallin submitted PR review.
cfallin created PR review comment:
Yeah, I agree; efficiency concerns aside (back to that below), it became necessary because, given symbolic expressions, the offset has a domain of... about 3/2 of a `u64`: it sometimes needs to be `i64`, and sometimes needs to be a full `u64`. I've papered over that with 128 bits, though in theory it needs "only" 65. It does feel generally better (less potentially subtly wrong) to use `i128::from(..)` everywhere and be able to perform universal arithmetic, rather than carefully thinking through all the intermediate underflow/overflow cases.
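The "3/2 of a `u64`" remark can be made concrete with a small sketch (hypothetical helper; not code from the PR): the offset domain must cover all of `i64` (negative offsets) and all of `u64` (large unsigned bounds) at once, which needs 65 bits, and `i128` covers that with enough headroom that intermediate sums and differences cannot overflow.

```rust
// Why a 128-bit offset: the domain spans from i64::MIN up to u64::MAX.
// That span needs 65 bits; i128 holds it with headroom, so intermediate
// arithmetic via i128::from(..) is overflow-free.
fn offset_domain_span() -> i128 {
    let lo = i128::from(i64::MIN); // most negative signed offset
    let hi = i128::from(u64::MAX); // largest unsigned bound
    hi - lo // = 2^64 + 2^63 - 1: representable, but overflows any 64-bit type
}
```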
cfallin submitted PR review.
cfallin created PR review comment:
Indeed! I had actually started down that road, but both managing the efficient refs-to-list-arena representation and getting the logic right at the same time was getting to be a bit too much; so I wanted to get it working first. As mentioned at the end of the commit message I do think we can do something at a high level: intern
ValueRange
s and then memoize operations over them (which addresses the quadratic bit below too).
cfallin created PR review comment:
Indeed! It's specifically for lower bounds: when scaling up, we want lower bounds to approximate downward, not upward. I'll add that use-case to the comment.
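The rounding-direction principle behind this exchange can be illustrated generically (this is not the PR's actual operation, just the soundness rule it follows): when a bound must be approximated, a lower bound may only move down and an upper bound may only move up, so the fact becomes less precise but never claims more than is true.

```rust
// Generic illustration of conservative bound approximation, using
// division as the example operation (hypothetical helpers, not PR code).
fn approx_lower_div(bound: u64, divisor: u64) -> u64 {
    // Floor: the result is <= the true quotient, so it remains a
    // valid (sound) lower bound.
    bound / divisor
}

fn approx_upper_div(bound: u64, divisor: u64) -> u64 {
    // Ceiling: the result is >= the true quotient, so it remains a
    // valid (sound) upper bound.
    bound.div_ceil(divisor)
}
```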
cfallin submitted PR review.
cfallin edited PR review comment.
cfallin updated PR #7965.
cfallin submitted PR review.
cfallin created PR review comment:
Good point on this and below; reworked so there is no `BaseExpr`, and `Expr` is now the enum with `Max` as a variant.
cfallin submitted PR review.
cfallin created PR review comment:
Good idea, fixed!
cfallin submitted PR review.
cfallin created PR review comment:
Added some comments regarding this to help clarify.
cfallin submitted PR review.
cfallin created PR review comment:
Added some notes on offset sizes here as well.
cfallin submitted PR review.
cfallin created PR review comment:
I played with this a bit, but (i) I want the single-item case to be bare (no braces) to avoid cluttering up the syntax (otherwise e.g. `range(64, {0}, {255})`), and (ii) `debug_set` takes the `Debug` of elements but I want the `Display`; I could adapt it for both, but then it's arguably more and weirder code than the above (to my eyes at least!).
cfallin submitted PR review.
cfallin created PR review comment:
So, this is a very interesting point: I was able to control the number of `equal` expressions by writing a bunch of addresses that simplify to the same value (e.g., loads from `p` and `p + (4-4)`). Lower and upper bounds, on the other hand, are not user-controllable, as the dynamic ones arise from memories and any given access only touches one memory (and static ones are simplified so that only one lower and one upper bound remain). Great catch, and thank you for thinking of this!

The need for "equal" constraints arises from transitivity -- we need to represent the values `v1` and `gv1` when we see `icmp gte v1, gv1` -- but I suspect we might be able to get away with at most one. This does mean we lose information sometimes, but...

(A possible sketch of a strategy: when we merge, always keep the "lesser" expression per an arbitrary lexicographic ordering; as long as we're consistent, we keep the comparison info for `v1` vs. `gv1` but not `v2` vs. `gv1`, across all related facts, and all related facts should merge if we have two loads of addresses `v1` and `v2` that ended up merging during opt.)
cfallin updated PR #7965.
cfallin commented on PR #7965:
With some more testing I've found the equality mechanism to be a little fragile (specifically around constants, as well as the above-mentioned quadratic representation for multiple merged SSA vals), and I need to go have a think; so I'm gonna switch this to "draft" mode.
fitzgen created PR review comment:
(A possible sketch of a strategy: when we merge, always keep the "lesser" expression per an arbitrary lexicographic ordering; as long as we're consistent, we keep the comparison info for `v1` vs. `gv1` but not `v2` vs. `gv1`, across all related facts, and all related facts should merge if we have two loads of addresses `v1` and `v2` that ended up merging during opt.)

This sounds very reasonable to me.
fitzgen submitted PR review.
cfallin commented on PR #7965:
After a little more thought around both the specific last issues seen here, and the efficiency in general of the larger facts, I had the above-mentioned think, and now I have a thought on a new approach that I'd like to document here before (unfortunately) having to context-switch away for a bit.
To recap, the current state is: `main` today has PCC that works fine for static checks (memory with sufficiently large guard regions); the range facts are compact `Copy` structs that describe statically-known integer bounds. The difficulties arise when modeling symbolic expressions, and comparisons between them, that arise with dynamic checks -- specifically, tying together a compare with a conditional select -- when this interacts either with optimizations merging things together or with constant values.

In my view, these difficulties are symptoms of trying to represent a "slice" of the overall worldview (a partial-order relation between program values) in facts on individual values. That is, from the perspective of an individual value, we have to carry around its inequalities with a bunch of other values in case we need them. Furthermore, because facts are attached to values, when constants are involved we can lose track of things: a constant value can be rematerialized or incorporated in a bunch of different places, and does not have a stable identity like an SSA value `v1` does.

So, IMHO, a reasonable solution might be to build a "whole worldview" data structure and share it across the function. One can see this as taking this PR a bit further: this PR lets a fact on a single value talk about multiple relations (so single facts go from atomic tidbits to slices of the whole world); instead, let's build "the whole world" and then point into it.
To be more concrete, I envision a "partial-order relation DAG" with nodes that represent sets of equal values, either symbolic (SSA values, global values) or concrete (integer constants), and edges that represent less-than-or-equal relations. Edge labels represent known "distance" (e.g., `5` on an edge from `x` to `y` means `x + 5 <= y`), and edges can also be labeled with predicates that make them true (value `v1`, the result of an `icmp`, implies `v2 <= v3`).

The idea is that we can point to any given node in this DAG from a fact on a value in the IR; these DAG nodes don't actually need to track the set of equal values (that's implicit in the set of all nodes that point to them); and we can do a union-find-like thing to efficiently answer less-than queries. Basically, we can implement `le(node1, node2, Option<pred>) -> Option<distance>`, where we get a `Some(min_distance)` if there is some path from one node to another (enabling edges labeled under `pred` if present), where `min_distance` is the sum of edge labels. Then we can compress the path by adding a direct edge (transitivity!) to make further queries more efficient. I suspect this will lead to a fairly efficient overall check of a function body, because the same comparisons will be done over and over (e.g., different offsets from a base pointer into a struct, all less than the guard-region size).

As mentioned, I need to context-switch away for a bit, but I'll try to get back to this at some point!
Last updated: Dec 23 2024 at 12:05 UTC