I'm in the process of writing my own Cranelift backend for a proprietary bytecode which has signed and unsigned instructions that do overflow checking. Since Cranelift doesn't differentiate between signed/unsigned types and operations, I'm kind of lost as to how to preserve that information through the optimization process. Is there a good way to do that, maybe attaching value labels (assuming they are preserved through e-graph and ISLE shenanigans)? How does the Rust Cranelift backend do it?
@Jan Katzer what sort of consumer are you needing the information for? In other words, do you have some stage or tooling that needs to know that a particular {value, operator} at the IR level became a particular {register, instruction} at the machine level? That is what debug value labels would give you. For overflow checking and compiled-in checks in general, though, I'd expect the IR itself to encode the check and what happens if it fails -- for example, a compare and trap on overflow (`iadd`, then `icmp` the result against either input -- if the result is less, it wrapped around/overflowed -- then `trapnz` on that 0/1 value). See e.g. how the Wasmtime frontend emits CLIF to implement explicit bounds-checks on heaps for an example of "compiled-in checks"
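A minimal CLIF sketch of that unsigned-add check (the value numbers and the `user0` trap code are placeholders, not from the discussion):

```clif
;; unsigned 32-bit add with an explicit overflow check
v2 = iadd v0, v1
;; unsigned overflow happened iff the result is less than either input
v3 = icmp ult v2, v0
;; trap with a (placeholder) user-defined trap code if v3 is nonzero
trapnz v3, user0
```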
@Chris Fallin In my specific case I have bytecode instructions which encode the signedness and overflow check as part of the operation, e.g. `add16s dst, src` does a signed 16-bit addition and traps if needed. So I would need that information in the ISA backend
Ah, sorry, I missed the "new backend" bit; I think the best answer here (or at least, the right one imho) is probably to add new CLIF instructions for these operators
Oh boy, that sounds like an adventure and a half :D
hopefully not too bad! The instructions are defined in `cranelift/codegen/meta/src/shared/instructions.rs`; each takes a little stanza of a handful of lines. Most of the compiler shouldn't need to know about them (they'll be considered pure by default, which may be a problem if you want unused trapping ops not to be optimized out); then use them in lowering rules in your backend
the InstBuilder boilerplate to add them when driving the Cranelift API, and all the glue for ISLE matching rules and such, is generated automatically
The trapping behaviour shouldn't matter too much, since it's transparent to the program (the interpreter just exits early) EDIT: Oh, you're right, but I'm fine with unused/optimized-out operations not trapping, since it's more of a security thing than a correctness thing
I'm mostly concerned about cranelift then not knowing how to optimize these new opcodes properly
so absent any newly defined mid-end rules, Cranelift should treat newly defined instructions as opaque operators -- we don't know what they are, so we won't rewrite them
since they default to "pure", they'll also be subject to code motion and deduplication -- e.g. GVN (reusing the result of the same trapping op computed earlier) or LICM (hoisting out of a loop, if loop-invariant) -- but those should be fine too, from your description
Makes sense, but as essentially all arithmetic operations would be my own opcodes, that would basically rule out much if not all of the optimization
the other good news is that you're free to add your own mid-end rules :-)
I can see why you might want to take a different approach and re-use all the rules we have for "normal" adds and such
I guess I'd be a little cautious of that in the face of signed/unsigned overflow semantics (is it still valid to commute operations, or rewrite `x - y` to `x + (-y)`, or apply any other algebraic identity, if we want to preserve what would have trapped in the middle of that expression?)
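That caution is easy to demonstrate concretely: under trapping 16-bit semantics, rewriting `x - y` to `x + (-y)` can introduce a trap the original expression never had. A self-contained Rust illustration using the standard library's checked arithmetic (the specific values are chosen for the example, not taken from the discussion):

```rust
fn main() {
    // A concrete case where x - y is fine but the rewritten x + (-y) traps:
    let x: i16 = -1;
    let y: i16 = i16::MIN; // -32768

    // Original form: -1 - (-32768) = 32767 = i16::MAX, no overflow.
    assert_eq!(x.checked_sub(y), Some(i16::MAX));

    // Rewritten form: negating i16::MIN overflows before the add even runs,
    // so a checked implementation would trap here.
    assert_eq!(y.checked_neg(), None);

    println!("x - y is fine, but x + (-y) would trap on the negate");
}
```

So an algebraic identity that is sound for wrapping arithmetic is not automatically sound once traps are part of the semantics.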
Like I said, that shouldn't be that big of a deal. The overflow checks are mostly for preventing accidental overflow; they aren't really a guaranteed part of the operation's semantics. `x + (-y)` is fine with me as long as the effective value doesn't cause a robot arm to spin around and decapitate a worker :D
(obligatory disclaimer: Cranelift is not warrantied against decapitation risks; that is, uh, terrifying and I'll keep thinking about formal verification techniques)
one other thing that comes to mind: you could define new "overflow check" operators that wrap the existing operators; so `overflow_check(iadd(x, y), x, y)` or something of the sort
with the semantics that these check-opcodes pass through the result value, but also potentially have the side-effect of trapping
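In CLIF-ish pseudocode, the wrapper idea might look like the following (`overflow_check` is the hypothetical new opcode from above; the value numbers are illustrative):

```clif
;; plain add, eligible for all the existing mid-end rewrite rules
v2 = iadd v0, v1
;; hypothetical opcode: passes v2 through unchanged, but has the
;; side-effect of trapping if v0 + v1 overflowed under the checked semantics
v3 = overflow_check v2, v0, v1
;; downstream uses consume v3, not v2, so the check stays attached
;; to the value while it is live
v4 = imul v3, v3
```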
this allows existing rules in the mid-end to fire, though we'll miss any rewrites that match on trees (`iadd(x, iadd(y, z))` sort of thing) because the check ops wrap each layer
but with additional rewrite rules the "middle" check ops could be removed, depending on what precise trapping semantics you want
Hm, that might be a good idea, though I'd also have to tinker with basically all the rewrite rules
finally, to your original point/direction: I can imagine a framework that carries signedness and boundedness information on the side; a new array in CLIF (`SecondaryMap<Value, SignAndBound>` or somesuch), and accessors to use it in lowering rules. I'd suggest that over trying to use debug value labels, since the latter... mostly work, but aren't really meant to be used in a "100% precision lossless type system" sort of way
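A stdlib-only sketch of that side-table idea -- standing in for `cranelift_entity::SecondaryMap`, with the `Sign` enum, the `u32` value index, and the `SignTable` type all made up for illustration:

```rust
use std::collections::HashMap;

// Hypothetical per-value annotation; in Cranelift proper this would be the
// value type of a SecondaryMap keyed by the real `Value` entity.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum Sign {
    Signed,
    Unsigned,
    Unknown,
}

// Stand-in for the CLIF `Value` entity index.
type Value = u32;

#[derive(Default)]
struct SignTable {
    map: HashMap<Value, Sign>,
}

impl SignTable {
    fn set(&mut self, v: Value, s: Sign) {
        self.map.insert(v, s);
    }

    // Missing entries read back as Unknown, mirroring SecondaryMap's
    // default-value behaviour for values that were never annotated.
    fn get(&self, v: Value) -> Sign {
        *self.map.get(&v).unwrap_or(&Sign::Unknown)
    }
}

fn main() {
    let mut signs = SignTable::default();
    signs.set(0, Sign::Signed); // e.g. the result of a signed add16s
    assert_eq!(signs.get(0), Sign::Signed);
    assert_eq!(signs.get(1), Sign::Unknown); // unannotated value
    println!("side-table lookups work");
}
```

The catch, as noted above, is that every transform that creates or rewrites values would need to propagate these annotations, which is exactly the "tinker with the egraph" cost discussed below.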
Yeah, I figured as much
anyway best of luck with your (properly signed and bounded) robots!
This might be the easiest route. I'd have to tinker with the egraph/rewrite/whatever to preserve that info, but that's easier than rewriting all the rules :D
Thanks! :D
Last updated: Nov 22 2024 at 17:03 UTC