Is anyone aware of any (wasm-specific) reason that a "standard" (bitwise-op-based) constant-time string equality check would not be constant-time? I can't think of any but just wanted to poll the hive mind in case there was something surprising here.
Optimizing compilers like LLVM tend to be surprisingly effective at transforming constant-time operations back into variable-time ones. But that applies to native compilation too.
Right, hence "(wasm-specific)"; most generic optimizations would presumably apply equally well to x86
I guess a different way to frame this question is: how nervous should I be about a typical constant-time equality check ported to Wasm in, say, the next 3 years?
Cranelift will translate a typical inner-loop-with-some-arithmetic-depending-on-per-loop-loads into machine code more or less 1-to-1; so you're down to whatever the ISA says about constant-time operations
and then for, say, x86, all the relevant ops (loads, compares) are constant-time in practice, but that's not guaranteed by the ISA
so if you're comfortable with an "in practice it's fine" then the pipeline from Wasm bytecode all the way to microarchitecture should be constant-time; caveats for whatever produces your Wasm bytecode, as bjorn3 says
Oh and these things don't typically have any explicit compares; grabbing a random reasonable-looking example:
    int cst_time_memcmp_safest1(const void *m1, const void *m2, size_t n)
    {
        const unsigned char *pm1 = (const unsigned char *)m1;
        const unsigned char *pm2 = (const unsigned char *)m2;
        int res = 0, diff;
        if (n > 0) {
            do {
                --n;
                diff = pm1[n] - pm2[n];
                res = (res & (((diff - 1) & ~diff) >> 8)) | diff;
            } while (n != 0);
        }
        return ((res - 1) >> 8) + (res >> 8) + 1;
    }
There will of course be some kind of branch at the call site on the result, which an optimizer could still exploit.
Last updated: Dec 06 2025 at 05:03 UTC