alexcrichton commented on issue #2018:
This has since been fixed, not because of anything we did directly, but because the layout of jit code is now different, which prevents the adjacent regions described above.
I don't think there's anything else we can do about this, and as long as we stay on our current course for the representation of jit compiler outputs I think we'll be ok.
alexcrichton closed issue #2018:
I noticed recently that jitdump support seems broken on master when used through the wasmtime CLI. I'm pretty perplexed by this, especially because bisection pointed to #1565 as the cause. I think, though, that #1565 as a cause is a red herring, and this has always been a bug. Given this `fib.rs`:

```rust
fn main() {
    let n = 42;
    println!("fib({}) = {}", n, fib(n));
}

fn fib(n: u32) -> u32 {
    if n <= 2 {
        1
    } else {
        fib(n - 1) + fib(n - 2)
    }
}
```
along with this embedding:
```rust
use wasmtime::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut config = Config::new();
    config.profiler(ProfilingStrategy::JitDump)?;
    let engine = Engine::new(&config);
    let store = Store::new(&engine);
    let module = Module::from_file(&engine, "./fib.wasm")?;
    let instance = Instance::new(&store, &module, &[])?;
    let start = instance.get_func("main").unwrap();
    let start = start.get2::<i32, i32, i32>()?;
    let f = if false { // changeme
        Func::wrap(&store, move || {
            start(0, 0)?;
            Ok(())
        })
    } else {
        let ty = FuncType::new(Box::new([]), Box::new([]));
        Func::new(&store, ty, move |_, _, _| {
            start(0, 0)?;
            Ok(())
        })
    };
    f.call(&[]).unwrap();
    Ok(())
}
```
When this is run as:
```
$ rustc --target wasm32-unknown-unknown fib.rs -O
$ cargo build && perf record -k mono ./target/debug/wasmtime
$ perf inject --jit --input perf.data --output perf.jit.data
$ perf report --input perf.jit.data
```
this will not attribute time to `fib::fib`; instead lots of raw jit addresses show up:

```
# Overhead  Command   Shared Object     Symbol
# ........  ........  ................  ........................................
#
     2.02%  wasmtime  [JIT] tid 142064  [.] 0x00007f6d16b4c418
     1.73%  wasmtime  libc-2.31.so      [.] __memmove_avx_unaligned_erms
     1.34%  wasmtime  [JIT] tid 142064  [.] 0x00007f6d16b4c422
     1.32%  wasmtime  [JIT] tid 142064  [.] 0x00007f6d16b4c499
     1.29%  wasmtime  wasmtime          [.] crossbeam_epoch::internal::Local::pin
     1.20%  wasmtime  wasmtime          [.] core::ptr::swap_nonoverlapping_one
     1.09%  wasmtime  wasmtime          [.] core::sync::atomic::atomic_load
     1.05%  wasmtime  [JIT] tid 142064  [.] 0x00007f6d16b4c479
     1.03%  wasmtime  [JIT] tid 142064  [.] 0x00007f6d16b4c428
```
If, however, the line with `// changeme` is switched to `true`, then you'll get the profile as expected:

```
# Overhead  Command   Shared Object       Symbol
# ........  ........  ..................  ......................................
#
    25.77%  wasmtime  jitted-142410-4.so  [.] fib::fib
     1.80%  wasmtime  libc-2.31.so        [.] __memmove_avx_unaligned_erms
     1.14%  wasmtime  wasmtime            [.] crossbeam_epoch::internal::Local::pin
     0.91%  wasmtime  wasmtime            [.] core::sync::atomic::atomic_load
     0.91%  wasmtime  wasmtime            [.] core::sync::atomic::atomic_compare_exchange
```
I'm honestly not entirely sure what's going on. I don't think that our jitdump integration itself is what's broken. The `*.dump` file emitted looks reasonable, `perf inject` appears to work, and `*.so` files pop up with all the expected function names in them. I'm not really sure what's happening here.

@jlb6740 do you know more about `perf`, perhaps, to help debug? It'd be great to know why `perf` recognizes the function in one case but not the other.

The only real difference here seems to be what's on the stack: whether it's native versions of trampolines (`Func::wrap`) or runtime-generated versions of trampolines (`Func::new`). We don't tell jitdump about the trampolines generated by `Func::new`, but after doing that it didn't seem to fix anything. Additionally, I think we have unwind info registered for everything, but I'm not even sure why perf would want to unwind, since we're not collecting stacks, just samples.

Overall I'm at a loss myself and I'm not sure what to do next. I'm hoping @jlb6740 you've got ideas of what might be going on :(
Last updated: Nov 22 2024 at 17:03 UTC