Hi folks. We have a framework we created for building and testing multiple non-web-based WASM engines. It gathers information on jitter and latency, and compares various algorithms across the differing engines and against native speed too. We shared our initial results at the WASM Research event last year. It's taken a while, but we've got the necessary internal approval to share this framework and the tests as an Apache-2.0 licensed open source contribution.
I know there are a whole bunch of folks doing performance testing. We're hoping this would help toward the wider community goal of having a shared set of tests and a framework to run them. It was originally created by us at Siemens as a home-grown way to provide some repeatable, cross-engine performance tests, to allow us to better understand the timing, jitter, and potential real-time use cases for WASM and the various runtimes. The framework accounts for the different engines and the ways in which they can be configured; for WAMR, for example, it covers AoT, JIT, Fast Interpreter, and Standard Interpreter.
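To give a rough idea of what I mean by covering the different configurations, here's a sketch of the kind of engine/mode matrix such scaffolding could iterate over (the names and structure below are illustrative only, not our actual code):

```python
# Illustrative sketch only -- not the actual framework code.
# A matrix of engines and execution modes that a build/benchmark
# scaffold could iterate over, building and testing each combination.
ENGINE_MATRIX = {
    "wamr": ["aot", "jit", "fast-interp", "classic-interp"],
    "native": ["baseline"],  # native build of the same algorithm, for comparison
}

for engine, modes in ENGINE_MATRIX.items():
    for mode in modes:
        print(f"would build and benchmark: {engine} / {mode}")
```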
I have a question for the community: should we share this initial contribution of the framework as a Bytecode Alliance project, or just share it on GitHub?
I see this contribution merging into the wider effort of a cross-engine performance testing framework. Is this something we in the Bytecode Alliance would like to have?
Are there any other efforts within the Alliance to create this type of cross-engine performance analysis? I know the WAMR team have their own in-house analysis, and I'd guess that our colleagues in Wasmtime have something similar...
@Chris Woods -- as a first question, how does it compare to Sightglass? This is our performance-testing framework for Wasmtime/Cranelift, and some folks have worked on runners for other engines as well.
I'll link to this thread from last week as well, discussing our thoughts on cross-engine comparisons in general
@Chris Fallin - good question, and the honest answer is that I don't know... We didn't look at Sightglass. Our goal was to create the scaffolding to download and build multiple engines, with multiple build configurations, then run a set of tests on each and gather the results, then repeat multiple times; a current test run gathers upwards of 50k data points. The framework of course comes with tests we've written which represent some of the types of algorithms we'd see used in our products today, but it's not limited to those; new test cases can of course be added.
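To sketch the measurement side (again, illustrative only; the function names are hypothetical and not our actual code), the idea is: time each invocation, repeat many times, and summarise latency and jitter per engine/mode/test case:

```python
# Illustrative sketch only -- not the actual framework code.
import statistics
import time

def run_once(launch_engine, test_case) -> float:
    """Time a single invocation of a test case on an already-built engine.
    `launch_engine` is a hypothetical callable that runs the engine binary."""
    start = time.perf_counter()
    launch_engine(test_case)
    return time.perf_counter() - start

def collect_samples(launch_engine, test_case, repeats: int = 1000) -> dict:
    """Run the case many times and summarise latency and jitter."""
    samples = [run_once(launch_engine, test_case) for _ in range(repeats)]
    return {
        "samples": len(samples),
        "mean_latency_s": statistics.mean(samples),
        "jitter_s": statistics.pstdev(samples),  # spread of latencies as a jitter proxy
    }

# Example: a dummy stand-in "engine" so the sketch runs on its own.
if __name__ == "__main__":
    print(collect_samples(lambda case: sum(range(10_000)), test_case="dummy"))
```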
Our initial thought was that perhaps the framework and scaffolding would be of more immediate use than the test cases themselves...
To be fair, I'm not sure of Sightglass's capabilities, so I'd need to learn, and I'm happy to. Perhaps this is already covered?
Thanks for the link, I missed it, so I'll check it out.
That would be the place to start at least, if the question is "would the BA like this", because that's our existing benchmarking answer :-)
I do think there's possibly lots of value in what you have, at least in the benchmarks -- we can always use more relevant use-cases. That's speaking just for myself though, I don't represent BA interests as a whole!
Thanks @Chris Fallin, I'll repost my question over there on that thread...
Dear future reader, if you get this far, click the link.... and chat to you there... ;p