Stream: git-wasmtime

Topic: wasmtime / PR #7965 Proof-carrying code: change range fac...


view this post on Zulip Wasmtime GitHub notifications bot (Feb 20 2024 at 19:57):

cfallin opened PR #7965 from cfallin:pcc-range-fix to bytecodealliance:main:

Previously, the proof-carrying code (PCC) mechanism optionally described each value in the program with at most one fact of the form:
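A hedged sketch of roughly that shape, for illustration only; the names and fields below are guesses based on the discussion that follows, not the actual Cranelift definitions:

```rust
/// A symbolic expression, e.g. a global value plus a constant offset (simplified here).
struct Expr {
    global_value: u32, // stand-in for `ir::GlobalValue`
    offset: i64,
}

/// Roughly the shape of the old "at most one fact per value" representation.
enum Fact {
    /// The value lies in a static range of concrete integers.
    Range { bit_width: u16, min: u64, max: u64 },
    /// The value lies in a range bounded by symbolic expressions
    /// (e.g. a dynamic memory bound held in a global value).
    DynamicRange { bit_width: u16, min: Expr, max: Expr },
    /// Two incompatible facts met; nothing can be proven about the value.
    Conflict,
}
```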

This worked well enough in filetests that exercised PCC for static and dynamic Wasm memory cases: the fact language was expressive enough to describe all the invariants.

However, as soon as the optimizer started to combine different accesses -- for example, by sharing one select_spectre_guard across multiple accesses -- it quickly became apparent that we might need to describe both a static and a dynamic range for one value. The existing system fails to verify the new test in pcc_memory.rs here, producing Conflict facts.

To make the fact language more expressive, I worked through a series of increasingly general variants until finding one that seems to work:
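A hedged sketch of the general shape of the initial version, reconstructed from the review discussion below (SmallVecs of Exprs, a Max base, i128 offsets); names and exact fields are assumptions, not the PR's code:

```rust
use smallvec::SmallVec;

/// The base of a symbolic expression.
#[derive(Clone, PartialEq, Eq, Hash, PartialOrd, Ord)]
enum BaseExpr {
    /// An SSA value in the function (stand-in for `ir::Value`).
    Value(u32),
    /// A global value, e.g. a dynamic memory bound (stand-in for `ir::GlobalValue`).
    GlobalValue(u32),
    /// The maximum representable value.
    Max,
}

/// A symbolic expression: a base plus a constant offset. The offset is wide
/// enough to hold both i64 and u64 values (see the i128 discussion below).
#[derive(Clone, PartialEq, Eq, Hash, PartialOrd, Ord)]
struct Expr {
    base: BaseExpr,
    offset: i128,
}

/// The fact attached to a value: sets of known lower bounds, upper bounds,
/// and expressions the value is known to be equal to.
#[derive(Clone, PartialEq, Eq, Hash)]
struct ValueRange {
    min: SmallVec<[Expr; 4]>,
    max: SmallVec<[Expr; 4]>,
    equal: SmallVec<[Expr; 4]>,
}
```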

The way this works is: for any value for which we have a fact, we collect lower and upper bounds, and also expressions we know it's equivalent to. Like an egraph, we don't drop facts or "simplify" along the way, because any of the bits may be useful later. However, we don't explode in memory or analysis time, because the work is bounded by the stated facts: we locally derive the "maximum fact" for the result of an addition, then check whether it implies the stated fact on the actual result, and then keep only that stated fact.

The value described by these sets lies within the interval that is the intersection of all combinations of min/max values; this makes intersect quite simple (union the sets of bounds, and the equalities, because the value must satisfy both facts). Some of the other ops and tests -- union, and especially "is this value in the range" or "does this range imply this other range" -- are a little intricate, but I think correct. To pick a random example: an expression is within a range if we can prove that it is greater than or equal to all lower bounds, and vice-versa for upper bounds; OR if it is exactly equal to one of the equality bounds. Equality is structural on Exprs (i.e., the default derived Eq is valid) because they are not redundant: there is only one way to express v1+1, and we can never prove that v1 == v2 within the context of one expression.
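Under the hypothetical ValueRange sketched above, intersect then amounts to keeping every bound and equality from both sides, something like:

```rust
impl ValueRange {
    /// Intersect two facts about the same value: the value must satisfy both,
    /// so every lower bound, upper bound, and equality from either side applies.
    fn intersect(&self, other: &ValueRange) -> ValueRange {
        ValueRange {
            min: self.min.iter().chain(&other.min).cloned().collect(),
            max: self.max.iter().chain(&other.max).cloned().collect(),
            equal: self.equal.iter().chain(&other.equal).cloned().collect(),
        }
    }
}
```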

I will likely write up a bunch more in the top doc-comment and throughout the code; this is meant to get the working system in first. (I'm also happy to do this as part of this PR if preferred.)

There are also some ideas for performance improvement if needed, e.g. by interning ValueRanges and then memoizing the operations (intersect(range2, range5) = range7 in a lookup table). I haven't measured perf yet.
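A rough sketch of that interning/memoization idea, building on the hypothetical ValueRange above (all names here are made up, not the PR's code):

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct RangeId(u32);

/// Interns `ValueRange`s so identical ranges share one id, and memoizes
/// binary operations over them so repeated queries become table lookups.
#[derive(Default)]
struct RangePool {
    ranges: Vec<ValueRange>,
    dedup: HashMap<ValueRange, RangeId>,
    intersect_memo: HashMap<(RangeId, RangeId), RangeId>,
}

impl RangePool {
    fn intern(&mut self, range: ValueRange) -> RangeId {
        if let Some(&id) = self.dedup.get(&range) {
            return id;
        }
        let id = RangeId(self.ranges.len() as u32);
        self.ranges.push(range.clone());
        self.dedup.insert(range, id);
        id
    }

    fn intersect(&mut self, a: RangeId, b: RangeId) -> RangeId {
        if let Some(&id) = self.intersect_memo.get(&(a, b)) {
            return id;
        }
        let result = self.ranges[a.0 as usize].intersect(&self.ranges[b.0 as usize]);
        let id = self.intern(result);
        self.intersect_memo.insert((a, b), id);
        id
    }
}
```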

I also haven't fuzzed this yet but will do so and then submit any required bugfixes separately. Hopefully we can get this turned on soon!

view this post on Zulip Wasmtime GitHub notifications bot (Feb 20 2024 at 19:57):

cfallin requested abrown for a review on PR #7965.

view this post on Zulip Wasmtime GitHub notifications bot (Feb 20 2024 at 19:57):

cfallin requested wasmtime-compiler-reviewers for a review on PR #7965.

view this post on Zulip Wasmtime GitHub notifications bot (Feb 20 2024 at 19:57):

cfallin requested alexcrichton for a review on PR #7965.

view this post on Zulip Wasmtime GitHub notifications bot (Feb 20 2024 at 19:57):

cfallin requested wasmtime-core-reviewers for a review on PR #7965.

view this post on Zulip Wasmtime GitHub notifications bot (Feb 20 2024 at 19:57):

cfallin requested fitzgen for a review on PR #7965.

view this post on Zulip Wasmtime GitHub notifications bot (Feb 20 2024 at 21:06):

cfallin updated PR #7965.

view this post on Zulip Wasmtime GitHub notifications bot (Feb 20 2024 at 21:48):

fitzgen submitted PR review:

Overall LGTM, just a few questions and general suspicion flagging.

But most importantly: shouldn't there be a new clif filetest or two for the GVN behavior that warranted this fact representation change?

view this post on Zulip Wasmtime GitHub notifications bot (Feb 20 2024 at 21:48):

fitzgen created PR review comment:

It seems like Max should be a variant of Expr rather than BaseExpr? Because it seems like offsets shouldn't matter once we reach Max? So having both a non-zero offset and a base of Max seems like something we should make unrepresentable.

view this post on Zulip Wasmtime GitHub notifications bot (Feb 20 2024 at 21:48):

fitzgen created PR review comment:

Why did this need to become an i128? That's kind of scary given that we have smallvecs of Exprs.

view this post on Zulip Wasmtime GitHub notifications bot (Feb 20 2024 at 21:48):

fitzgen created PR review comment:

What if lhs.base == BaseExpr::Max but lhs.offset > rhs.offset?

view this post on Zulip Wasmtime GitHub notifications bot (Feb 20 2024 at 21:48):

fitzgen created PR review comment:

The O(n^2) loop is a little scary here but I guess in practice this is sized at most by the number of unique values we GVN together? Which seems like it should generally be relatively small? But also something that is ~unbounded and user-controllable :grimacing:

But I guess they aren't controlling the actual facts used during a wasm load and the inputs to an i32.load instruction are basically any i32 value. So I think this is maybe okay?

Seems like something we should think about and have a comment for calming readers down.

view this post on Zulip Wasmtime GitHub notifications bot (Feb 20 2024 at 21:48):

fitzgen created PR review comment:

I guess let's just get something working first and worry about performance later, but having a bunch of smallvecs is a little concerning. In general, we try to keep all the guts of the data in DataFlowGraph Copy and use things like cranelift_entity::EntityList to manage non-POD data within. But yeah, we can come back to this in the future.

view this post on Zulip Wasmtime GitHub notifications bot (Feb 20 2024 at 21:48):

fitzgen created PR review comment:

This feels like a suspicious operation. Rounding down seems sus when we are talking about proving that accesses are in-bounds!

view this post on Zulip Wasmtime GitHub notifications bot (Feb 20 2024 at 21:48):

fitzgen created PR review comment:

Seems like this could be cleaned up with https://doc.rust-lang.org/nightly/std/fmt/struct.Formatter.html#method.debug_set

view this post on Zulip Wasmtime GitHub notifications bot (Feb 20 2024 at 21:48):

fitzgen created PR review comment:

Interesting that we can return any of equal and be correct here. Does it matter? How are callers using this method? Does it make sense to tweak what is being asked here?

view this post on Zulip Wasmtime GitHub notifications bot (Feb 20 2024 at 21:58):

cfallin submitted PR review.

view this post on Zulip Wasmtime GitHub notifications bot (Feb 20 2024 at 21:58):

cfallin created PR review comment:

Yeah, I agree; efficiency concerns aside (back to that below) it became necessary because given symbolic expressions, the offset has a domain of... about 3/2 of a u64: it sometimes needs to be i64, and sometimes needs to be a full u64. I've papered over that with 128 bits, though in theory it needs "only" 65. It does feel generally better (less potentially subtly wrong) to use i128::from(..) everywhere and be able to perform universal arithmetic, rather than carefully thinking through all the intermediate underflow/overflow cases.
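For illustration (simple arithmetic, not the PR's code): both extremes of the needed domain fit in i128 with room to spare, so intermediate sums and differences can't overflow.

```rust
fn main() {
    // The offset domain spans both ends of i64 and u64.
    let most_negative = i128::from(i64::MIN); // e.g. a negative static offset
    let largest = i128::from(u64::MAX);       // e.g. a full-width unsigned bound
    // Any sum or difference of two such values still fits easily in i128,
    // so no per-case overflow reasoning is needed.
    assert_eq!(largest - most_negative, (1i128 << 64) - 1 + (1i128 << 63));
}
```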

view this post on Zulip Wasmtime GitHub notifications bot (Feb 20 2024 at 22:00):

cfallin submitted PR review.

view this post on Zulip Wasmtime GitHub notifications bot (Feb 20 2024 at 22:00):

cfallin created PR review comment:

Indeed! I had actually started down that road, but managing the efficient refs-to-list-arena representation and getting the logic right at the same time were getting to be a bit too much; so I wanted to get it working first. As mentioned at the end of the commit message I do think we can do something at a high level: intern ValueRanges and then memoize operations over them (which addresses the quadratic bit below too).

view this post on Zulip Wasmtime GitHub notifications bot (Feb 20 2024 at 22:00):

cfallin created PR review comment:

Indeed! It's specifically for lower bounds: when scaling up, we want lower bounds to approximate downward, not upward. I'll add that use-case to the comment.

view this post on Zulip Wasmtime GitHub notifications bot (Feb 20 2024 at 22:00):

cfallin submitted PR review.

view this post on Zulip Wasmtime GitHub notifications bot (Feb 20 2024 at 22:02):

cfallin edited PR review comment.

view this post on Zulip Wasmtime GitHub notifications bot (Feb 21 2024 at 02:45):

cfallin updated PR #7965.

view this post on Zulip Wasmtime GitHub notifications bot (Feb 21 2024 at 02:45):

cfallin submitted PR review.

view this post on Zulip Wasmtime GitHub notifications bot (Feb 21 2024 at 02:45):

cfallin created PR review comment:

Good point on this and below; reworked so there is no BaseExpr and Expr is now the enum with Max as a variant.

view this post on Zulip Wasmtime GitHub notifications bot (Feb 21 2024 at 02:45):

cfallin submitted PR review.

view this post on Zulip Wasmtime GitHub notifications bot (Feb 21 2024 at 02:45):

cfallin created PR review comment:

Good idea, fixed!

view this post on Zulip Wasmtime GitHub notifications bot (Feb 21 2024 at 02:45):

cfallin submitted PR review.

view this post on Zulip Wasmtime GitHub notifications bot (Feb 21 2024 at 02:45):

cfallin created PR review comment:

Added some comments regarding this to help clarify.

view this post on Zulip Wasmtime GitHub notifications bot (Feb 21 2024 at 02:46):

cfallin submitted PR review.

view this post on Zulip Wasmtime GitHub notifications bot (Feb 21 2024 at 02:46):

cfallin created PR review comment:

Added some notes on offset sizes here as well.

view this post on Zulip Wasmtime GitHub notifications bot (Feb 21 2024 at 02:51):

cfallin submitted PR review.

view this post on Zulip Wasmtime GitHub notifications bot (Feb 21 2024 at 02:51):

cfallin created PR review comment:

I played with this a bit but (i) I want the single-item case to be bare (no braces) to avoid cluttering up the syntax (otherwise e.g. range(64, {0}, {255})) and (ii) debug_set takes the Debug of elements but I want the Display; could adapt for both but then it's arguably more and weirder code than the above (to my eyes at least!).

view this post on Zulip Wasmtime GitHub notifications bot (Feb 21 2024 at 03:07):

cfallin submitted PR review.

view this post on Zulip Wasmtime GitHub notifications bot (Feb 21 2024 at 03:07):

cfallin created PR review comment:

So, this is a very interesting point: I was able to control the number of equal expressions by writing a bunch of addresses that simplify to the same value. (E.g., loads from p and p + (4-4).) Lower and upper bounds, on the other hand, are not user-controllable, as the dynamic ones arise from memories and any given access only touches one memory (and the static ones are simplified so that only one lower and one upper bound remain). Great catch and thank you for thinking of this!

The need for "equal" constraints arises from the transitivity -- we need to represent the values v1 and gv1 when we see icmp gte v1, gv1 -- but I suspect we might be able to get away with at most one. This does mean we lose information sometimes but...

(A possible sketch of a strategy: when we merge, always keep the "lesser" expression per the arbitrary lexicographical ordering; as long as we're consistent, we keep the comparison info for v1 vs. gv1 but not v2 vs. gv1, across all related facts, and all related facts should merge if we have two loads of addresses v1 and v2 that ended up merging during opt.)
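A tiny sketch of that merge rule, assuming the hypothetical Expr above with a derived (arbitrary but consistent) Ord:

```rust
/// Merge the `equal` information from two facts, keeping at most one
/// representative: the lexicographically least expression. Applied
/// consistently, facts about values that get merged during opt converge
/// on the same representative (e.g. `v1` rather than `v2`).
fn merge_equal(lhs: Option<Expr>, rhs: Option<Expr>) -> Option<Expr> {
    match (lhs, rhs) {
        (Some(a), Some(b)) => Some(core::cmp::min(a, b)),
        (one, None) | (None, one) => one,
    }
}
```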

view this post on Zulip Wasmtime GitHub notifications bot (Feb 21 2024 at 07:18):

cfallin updated PR #7965.

view this post on Zulip Wasmtime GitHub notifications bot (Feb 21 2024 at 07:23):

cfallin commented on PR #7965:

With some more testing I've found the equality mechanism to be a little fragile (specifically around constants, as well as the above-mentioned quadratic representation for multiple merged SSA vals), and I need to go have a think; so I'm gonna switch this to "draft" mode.

view this post on Zulip Wasmtime GitHub notifications bot (Feb 21 2024 at 15:43):

fitzgen created PR review comment:

> (A possible sketch of a strategy: when we merge, always keep the "lesser" expression per the arbitrary lexicographical ordering; as long as we're consistent, we keep the comparison info for v1 vs. gv1 but not v2 vs. gv1, across all related facts, and all related facts should merge if we have two loads of addresses v1 and v2 that ended up merging during opt.)

This sounds very reasonable to me.

view this post on Zulip Wasmtime GitHub notifications bot (Feb 21 2024 at 15:43):

fitzgen submitted PR review.

view this post on Zulip Wasmtime GitHub notifications bot (Feb 23 2024 at 19:45):

cfallin commented on PR #7965:

After a little more thought about both the last specific issues seen here and the general efficiency of the larger facts, I had the above-mentioned think, and now I have a thought on a new approach that I'd like to document here before (unfortunately) having to context-switch away for a bit.

To recap, the current state is:

In my view, these difficulties are symptoms of trying to represent a "slice" of the overall worldview (partial order relation between program values) in facts on individual values. That is, from the perspective of an individual value, we have to carry around its inequalities with a bunch of other values in case we need them. Furthermore, because facts are attached to values, when constants are involved, we can lose track of things: a constant value can be rematerialized or incorporated in a bunch of different places and does not have a stable identity like an SSA value v1 does.

So, IMHO a reasonable solution might be to build a "whole worldview" data structure, and share it across the function. One can see this as taking this PR a bit further: this PR lets a fact on a single value talk about multiple relations (so single facts go from atomic tidbits to slices of the whole world); instead let's build "the whole world" and then point into it.

To be more concrete, I envision a "partial order relation DAG" with nodes that represent sets of equal values, either symbolic (SSA values, global values) or concrete (integer constants), and edges that represent less-than-or-equal relations. Edge labels represent a known "distance" (e.g., a label of 5 on an edge from x to y means x + 5 <= y), and edges can also be labeled with predicates that make them true (e.g., value v1, the result of an icmp, being true implies v2 <= v3).

The idea is that we can point to any given node in this DAG from a fact on a value in the IR; these DAG nodes don't actually need to track the set of equal values (that's implicit in the set of all values whose facts point to them); and we can do a union-find-like thing to efficiently answer less-than queries. Basically we can implement le(node1, node2, Option<pred>) -> Option<distance> where we get a Some(min_distance) if there is some path from one node to another (enabling edges labeled under pred if present), where min_distance is the sum of edge labels. Then we can compress the path by adding a direct edge (transitivity!) to make further queries more efficient. I suspect this will lead to a fairly efficient overall check of a function body, because the same comparisons will be done over and over. (E.g., different offsets from a base pointer into a struct, all less than the guard-region size.)
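A minimal sketch of what such a structure and its le query might look like (all names hypothetical; a real version would cache negative results, prefer the strongest provable distance, and tie into the existing fact storage):

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct NodeId(u32);

/// A predicate that must be known-true for an edge to be usable,
/// e.g. the SSA value produced by an `icmp`.
#[derive(Clone, Copy, PartialEq, Eq)]
struct Pred(u32);

#[derive(Clone)]
struct Edge {
    to: NodeId,
    /// `from + distance <= to`.
    distance: i128,
    /// If present, the edge only holds when this predicate is true.
    pred: Option<Pred>,
}

#[derive(Default)]
struct OrderDag {
    edges: HashMap<NodeId, Vec<Edge>>,
}

impl OrderDag {
    /// If there is a path from `a` to `b` (optionally enabling edges guarded
    /// by `pred`), return a proven distance `d` such that `a + d <= b`.
    fn le(&mut self, a: NodeId, b: NodeId, pred: Option<Pred>) -> Option<i128> {
        if a == b {
            return Some(0);
        }
        let succs = self.edges.get(&a)?.clone();
        for edge in succs {
            if edge.pred.is_some() && edge.pred != pred {
                continue; // this edge needs a predicate we don't have
            }
            if let Some(rest) = self.le(edge.to, b, pred) {
                let distance = edge.distance + rest;
                // Path compression (transitivity): record a direct edge so
                // repeated queries for the same comparison are cheap.
                self.edges.entry(a).or_default().push(Edge { to: b, distance, pred });
                return Some(distance);
            }
        }
        None
    }
}
```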

As mentioned I need to context-switch away for a bit but I'll try to get back to this at some point!

