From the docs I see that a store should be short-lived, but can I cache a wasmtime::Instance
inside a long-lived (unbounded) object?
An instance lives within a store; but if you want to cache the work done to instantiate (linking/resolving imports), you can create an InstancePre
and use that to create instances in new stores
Thanks @Chris Fallin for the response. What I meant by caching is: if I don't have an instance, I create a store, make the instance, and then cache it, so that next time I wouldn't have to make a new store at all if I already have an instance. Looking at the APIs, I don't see why this can't be done: instantiation requires a mutable ref to the store but returns a fully owned Instance, which shouldn't care about the store getting dropped (and being short-lived).
If you throw away the Store you won't be able to do anything useful with the Instance; calling into it will require the Store again.
the purpose of InstancePre is that it makes the process of instantiation extremely fast and low-cost.
Ohh right @Lann Martin. Sorry, I overlooked that. Then I believe using InstancePre
is the way to go! Thanks @Chris Fallin for the suggestion and thanks @Lann Martin for pointing this out.
if you're going to reuse something including all the mutable state (i.e. linear memory, globals, tables), you should put ownership of the (Store, Instance) pair into some data structure and then take it back out of there later (regaining ownership of the store) so you can use it
if you aren't trying to reuse the state, then you should drop the store and create a new one.
Pat Hickey said:
the purpose of InstancePre is that it makes the process of instantiation extremely fast and low-cost.
So this is in fact used for performance gains? I am looking for exactly that, and it seemed that frequent instance creation might be dragging performance down.
InstancePre.instantiate is the fastest possible path to an instance. it's not zero-cost - nothing really is - but it's as low-cost as we know how to make it.
ohh okay, makes sense. Can the instance allocator also be useful along with this?
e.g. for cloud services that make a new instance of the same module or component on every HTTP request, we use a pooling allocator (set up in the config, docs there) and InstancePre to minimize instantiation time
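A hedged config sketch for enabling the pooling allocator; the exact builder methods and limits vary across Wasmtime versions, so treat the names and numbers below as an illustration, not a recipe.

```rust
use wasmtime::{Config, Engine, InstanceAllocationStrategy, PoolingAllocationConfig};

fn main() -> wasmtime::Result<()> {
    // Pre-reserve instance slots up front so per-request instantiation
    // avoids most memory-setup work. Method names vary by version.
    let mut pooling = PoolingAllocationConfig::default();
    pooling.total_core_instances(1_000);

    let mut config = Config::new();
    config.allocation_strategy(InstanceAllocationStrategy::Pooling(pooling));
    let engine = Engine::new(&config)?;

    // Build the Module / Linker / InstancePre against this engine as usual.
    let _ = engine;
    Ok(())
}
```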
but internally it uses a mutex, which could again pose a bottleneck
yes, the idea of a pooling allocator is that it amortizes as many of the memory setup steps up front as possible
what uses a mutex, and where?
Pat Hickey said:
yes, the idea of a pooling allocator is that it amortizes as many of the memory setup steps up front as possible
pooling allocator uses mutex: https://github.com/bytecodealliance/wasmtime/blob/main/crates/wasmtime/src/runtime/vm/instance/allocator/pooling/index_allocator.rs
You are almost certainly bottlenecking on memory management before any mutex contention
yes, that's using a mutex so that slot bookkeeping is maintained correctly. the critical section is just on manipulation of the slot data structure (Inner) and not on any other part of initialization
agreed with lann, we have never seen a system where that is a bottleneck
ohh okay, so let me use InstancePre
and run my benchmarks again. Thanks again people!
wasmtime has many large production users that depend on that path being as efficient as possible, and we put a ton of engineering into making it as fast as we can
yeah @Pat Hickey absolutely :100: In fact I got to this pooling allocator through one of the official blogs: https://bytecodealliance.org/articles/wasmtime-10-performance which itself is quite interesting. Thanks again for the help, will try out the pooling allocator + InstancePre and see how it plays out.
I just have one doubt, my engine has async config
over 2 years ago. time flies!
I just have one doubt, my engine has async config
Not a problem
everything will be fine in async, all of the prod users are using async too
ohh okay, perfect! let me try then. Thanks again for the help and the amazing Wasmtime project!
Last updated: Nov 22 2024 at 17:03 UTC