At the last Wasmtime meeting we talked about setting up a discussion about Sightglass-related stuff: CI infrastructure, a native runner, runners for other engines (e.g. v8), more benchmarks, etc. There are existing issues in the Sightglass repository that need finishing as well as new things that I (and perhaps others) would find useful. I think @fitzgen (he/him) is out of town right now and I won't be working for the next two weeks, but I am trying to follow up on that discussion. I propose we meet in January on Zoom to discuss some of this in a forum other than the Wasmtime meeting. Who wants to be involved in planning and implementing the next stage of Sightglass?
cc: @Chris Fallin, @Till Schneidereit, @Dan Gohman, @Alex Crichton, @Johnnie Birch
I'm happy to join in any discussions about benchmarking, etc. -- thanks for taking the planning initiative here! I'm also out for the next two weeks (as most of us are, I guess), but a Zoom call in the first week of Jan sounds good.
Agreed with Chris: thank you for kicking off the planning for this! :heart: A call in early January sounds great to me. And while I'd be interested in joining, that might make it more challenging to actually organize, so it shouldn't block on me :smile:
I'm happy to join!
yes, happy to have a call
How does this Friday at 10am sound? I'll set up a Teams meeting if no one minds too much (but if you do, I think someone else will have to create the Zoom invite).
Works for me! I'm happy to create a Zoom meeting for this (I've had issues with joining Teams meetings in the past from this laptop)
Ok, I sent an invite to everyone who said something here, but I was guessing at some of the Fastly e-mail addresses, so let me know if you didn't get it.
We could also mention this in the Wasmtime meeting tomorrow...
Has a meeting already been set up? I might be interested in attending too.
Oh, shoot... it just happened yesterday. Why don't you private-message me your e-mail and I'll send you the notes?
I am trying to figure out how to run Sightglass natively and also, potentially, how to integrate WAMR, if possible. Do you have any guides for this?
When you say "natively" do you mean "run Wasm modules in a native-compiled Sightglass binary" or "run native code in a native-compiled Sightglass binary"? The Sightglass CLI should always compile natively on your machine...
If you're looking for the latter, @Johnnie Birch merged https://github.com/bytecodealliance/sightglass/pull/228 to do this kind of thing. I believe https://github.com/bytecodealliance/sightglass/tree/main/engines/native may be of some help but I'm sure he'd be interested to hear if something you were trying did not work.
As for creating a Sightglass engine to benchmark WAMR, I would advise looking at the `wasmtime-bench-api` crate: https://github.com/bytecodealliance/wasmtime/blob/main/crates/bench-api/src/lib.rs. This crate builds to a shared library exposing some symbols (`wasm_bench_create`, `wasm_bench_compile`, etc.) that Sightglass uses for driving the benchmark compilation, instantiation, and execution.
@Mats Brorsson .. Hi Mats, depending on what you mean by "run natively", the patch Andrew pointed to may help. It allows a benchmark that has been compiled to a native target (not the Wasm target) and linked with Sightglass to be run through Sightglass, with a report similar to what the Wasm benchmarking shows. It's just the initial patch, though, and is currently only supported by the shootout subset of benchmarks.
Another question about Sightglass. We have run it on x86_64 (a laptop), ARM (an Nvidia Jetson board), and RISC-V (a VisionFive 2 board). The number of cycles on x86_64 across all phases (compilation, instantiation, and execution) is much higher than on ARM and RISC-V. I ran Sightglass with pinned processes; the only difference is that I used `processes = 10` on the laptop and `processes = 4` on the smaller boards, as they have fewer cores and less memory. Would that affect the results?
It should not, no. But maybe look at some other measure kinds (`--measure`) to get a sense of what is going on.
Last updated: Dec 23 2024 at 13:07 UTC