Stream: cranelift

Topic: Does cranelift aim to be an optimizing compiler backend?


walker (May 26 2022 at 09:13):

I got to know Cranelift by looking at the rustc codebase, and I have a basic question: is Cranelift aiming to replace LLVM at some point, i.e. to offer almost the same features while compiling a bit faster and being written completely in Rust?

I noticed that Cranelift is mostly advertised as being "faster" than LLVM, but nothing (that I could find) mentions the performance of Cranelift-generated binaries compared to LLVM's.
So my question is: will it be safe to use Cranelift as a compiler IR/backend and assume that it will eventually catch up with LLVM in both architecture support and optimizations? (It would be cool to see RISC-V and AArch64 backends and a modular pass manager.)

Chris Fallin (May 26 2022 at 16:22):

Hi @walker , this is a great question, thanks for asking!

Cranelift aims to be an optimizing compiler, but the best peers to compare it to are probably the optimizing tiers of browsers' JIT engines, such as V8's TurboFan and SpiderMonkey's IonMonkey.

LLVM spends a ton of effort (compile time) on optimization, and has engineer-decades of time spent optimizing that; and O(dozens) of people, at least, actively working on it. We're a much much smaller project (~one-ish fulltime person on the compiler core, a few on backends) so we probably won't ever generate code that is completely at parity with LLVM's output. But optimization is very much a "most of the benefit for a subset of the effort" kind of problem, and we anticipate that we can get close. We have active efforts to incorporate more optimizations.

The most recent comparison of Cranelift perf vs LLVM (wasmtime vs WAVM that uses LLVM, specifically) that I'm aware of is in Fig 22 of this paper: https://arxiv.org/pdf/2011.13127.pdf. The orange bar (Wasmtime with Cranelift) is just a hair slower (a few percent?) than V8/TurboFan and maybe ~10% slower (eyeballing the gap) than WAVM with LLVM. We have ongoing efforts to build continuous benchmarking infrastructure; @Andrew Brown and @Johnnie Birch are driving this with our "Sightglass" benchmarking tool.

Re: architecture support: we actually already support AArch64! (since Apr 2020, at parity with full SIMD since sometime last year) We have three backends: x86-64, aarch64, s390x. We'd also love to have RISC-V (32 and 64), ARM32, x86-32, ppc64, ... it's just a matter of time. Each of these is a few months of fulltime work and then ongoing maintenance and we don't have anyone to spare for that right now.

One final thing I'll say is that we have a focus on formal verification and safety that IMHO is a bit more explicit/first-class than in LLVM; e.g. we're actively working with some academic folks to formally verify our instruction selector, and our register allocator symbolically verifies its results (if that option is enabled).

I wrote up a doc describing Cranelift's unique focuses last year, but it was waiting on some other Bytecode Alliance stuff to come together before we release it; cc @Till Schneidereit , it may be a good time to reconsider getting that out!

fitzgen (he/him) (May 26 2022 at 16:36):

You may find this interesting, although it doesn't talk about performance: https://github.com/bytecodealliance/wasmtime/blob/main/cranelift/docs/compare-llvm.md

I noticed that Cranelift is mostly advertised as being "faster" than LLVM, but nothing (that I could find) mentions the performance of Cranelift-generated binaries compared to LLVM's.

FWIW, these claims are about the time it takes the compiler to generate code, not claims about the speed of the generated code emitted by the compiler.


Chris Fallin (May 26 2022 at 16:38):

Ah, there are some outdated bits in that doc; I'll make a note to update it when I get a chance.

Chris Fallin (May 26 2022 at 16:49):

Re: "faster" as compile time -- referring to the same paper above, Fig 18 shows a log-scale plot of compile times for the CoreMark benchmark suite. The orange bar (Wasmtime on Cranelift) is ~10x faster than the purple bar (WAVM on LLVM). Since the version of Cranelift used in that paper, things have gotten a little faster as well (a new register allocator).

walker (May 26 2022 at 19:02):

Alright, thank you so much for the detailed explanation. I am surprised that you are concerned with formal verification right from the start; that is indeed very interesting! To be honest, I was planning to use Cranelift as a codegen backend for an experimental higher-order-logic language design, and hopefully to stick with it once my toy language becomes less experimental. I think I will move forward with it, since the performance hit is not much for the time being, and it will be nicer to write additional optimization passes in Rust than to do it in C++.

Alphyr (May 27 2022 at 16:03):

Regarding RISC-V support, there is some work done here: https://github.com/yuyang-ok/wasmtime


Last updated: Oct 23 2024 at 20:03 UTC