Stream: wasmtime

Topic: Limits on number of instances and modules


Matias Funder (Nov 29 2020 at 20:09):

Hello,
I am evaluating Wasmtime and its limits. Using the Rust embedding, I have done the following:

When I load a tiny module and create and retain as many instances as possible, the program crashes at instance 21,843, 21,844, or 21,845, apparently using only 3 GB of memory. The number doesn't change across reboots or with different amounts of system memory (e.g. 8 GB vs. 16 GB).
What could cause this? Is there a way to increase this number?

Similarly, I tried loading and retaining as many modules as possible, and the program crashes at exactly 30,000 modules.
What could cause this? Is there a way to increase this number?

Peter Huene (Nov 29 2020 at 21:42):

Each instance reserves 6 GiB of address space by default on 64-bit systems. My guess would be you're running into address space exhaustion with that many concurrent instances (on x86-64 Linux, for example, ~128 TiB of user address space divided by 6 GiB per instance is about 21,845, which matches the numbers you're seeing). You can control how much address space is reserved with various settings on Config. See the docs here: https://docs.rs/wasmtime/0.21.0/wasmtime/struct.Config.html#static-vs-dynamic-memory
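
A minimal sketch of the most relevant knob (method names follow the 0.21-era Config API linked above; later Wasmtime releases renamed some of these settings):

```rust
use wasmtime::Config;

let mut config = Config::new();
// A maximum static memory size of 0 forces dynamically allocated linear
// memories, so instances stop reserving ~6 GiB of address space each.
config.static_memory_maximum_size(0);
```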

Peter Huene (Nov 29 2020 at 21:43):

The crash you're observing, is it a SIGSEGV or a Rust panic, by the way?

Peter Huene (Nov 29 2020 at 21:43):

As I would expect us to gracefully handle a failed mmap call

Peter Huene (Nov 29 2020 at 21:44):

A panic would be considered graceful in this scenario, perhaps

Peter Huene (Nov 29 2020 at 21:46):

I'd like to see a test case for the module creation, as that limit sounds internal rather than system-constrained

Peter Huene (Nov 29 2020 at 21:49):

FYI, I'm currently working on some changes to Wasmtime that will allow a consistent preallocation of resources for a "pool" of instances with configurable limits, which is meant to help performance for "instance-per-service-request" type use cases
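
That work later landed as Wasmtime's pooling instance allocator. A rough sketch against a much newer wasmtime API (these names postdate this thread and vary between releases):

```rust
use wasmtime::{Config, Engine, InstanceAllocationStrategy, PoolingAllocationConfig};

fn main() {
    let mut config = Config::new();
    // Preallocate a fixed pool of instance slots up front; per-resource
    // limits are set via methods on PoolingAllocationConfig.
    config.allocation_strategy(InstanceAllocationStrategy::Pooling(
        PoolingAllocationConfig::default(),
    ));
    let _engine = Engine::new(&config).expect("failed to create engine");
}
```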

Alex Crichton (Nov 29 2020 at 23:50):

By default on my MacBook laptop I also capped out at around 21k instances, but after creating a store with static_memory_maximum_size set to zero it's in the 3 million range right now and still counting. The 21k run capped out with what I believe was an mmap failure for me, although @Matias Funder may be seeing something different. But yeah, as @Peter Huene mentioned, there are various knobs to tweak when you scale wasm quite a lot, and Wasmtime's default settings may not always be the best fit for every situation.
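
A rough reconstruction of that experiment (a sketch, not the actual test code; written against the current Store/Instance API, which differs from the 0.21 API used in this thread, and assuming a trivial one-memory module):

```rust
use wasmtime::{Config, Engine, Instance, Module, Store};

fn main() {
    let mut config = Config::new();
    // Avoid the large per-instance address-space reservation.
    config.static_memory_maximum_size(0);
    let engine = Engine::new(&config).expect("engine");
    let module = Module::new(&engine, "(module (memory 1))").expect("module");
    let mut store = Store::new(&engine, ());
    let mut instances = Vec::new();
    loop {
        // Retain every instance so nothing is dropped, and count until failure.
        match Instance::new(&mut store, &module, &[]) {
            Ok(instance) => instances.push(instance),
            Err(e) => {
                println!("failed after {} instances: {e}", instances.len());
                break;
            }
        }
    }
}
```

With the default settings, the ~128 TiB user address space of a 64-bit Linux box would be exhausted after roughly 21.8k iterations of this loop; with the 0-byte static maximum, the loop is bounded by physical memory instead.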

Alex Crichton (Nov 29 2020 at 23:52):

Well, I killed it around 6M because it was taking a hundred GB or so of RAM and my laptop only has 16 GB, the poor thing

Matias Funder (Nov 30 2020 at 02:59):

The 21k limit causes: thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Insufficient resources: Cannot allocate memory (os error 12)'.
It appears to allocate >120 TB of virtual memory. Real memory use is about 125 kB per instance.
With 16 GB and 125 kB per instance, the upper limit should be 128k instances.

Running with config.static_memory_maximum_size(0) uses far more real memory, and an amount of virtual memory that is about the same as the real memory.
Unfortunately, it already dies at 11.8k instances.
Real memory use is about 1.2 MB per instance, and it ends with: Process finished with exit code 137 (interrupted by signal 9: SIGKILL).
So I am not getting the same result as @Alex Crichton

@Alex Crichton Were you testing with a program that didn't import any memory? That would highlight that the problem is indeed with the memory allocation, and not with the instances.

Setting config.static_memory_guard_size(20_000) and config.dynamic_memory_guard_size(10_000)
made the program crash at 32k instances (approx. 130 TB of virtual memory).
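
For reference, that configuration looks roughly like this (a sketch; both methods take sizes in bytes):

```rust
use wasmtime::Config;

let mut config = Config::new();
// Shrink the guard regions reserved after each linear memory (in bytes),
// trading address-space usage for more bounds-check code in compiled wasm.
config.static_memory_guard_size(20_000);
config.dynamic_memory_guard_size(10_000);
```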

Perhaps it would make sense to have an error message like "not enough virtual memory"?

Matias Funder (Nov 30 2020 at 03:31):

I also tested with an empty (module) program and it still used 88 kB/instance (which is totally fine... the problem is virtual memory, not real memory usage)

Matias Funder (Nov 30 2020 at 05:05):

Peter Huene said:

The crash you're observing, is it a SIGSEGV or a Rust panic, by the way?

The 30k module limit was a mistake on my end. I had generated 30k unique modules so the test would bypass any possible caching, but then increased the test to 50k without correctly generating the new modules.

Loading (and keeping references to) 30k unique tiny modules used almost no memory (~100-300 MB), so that's great.

Alex Crichton (Nov 30 2020 at 15:04):

Hm interesting! I suspect different OSes have different behavior here. @Matias Funder could you share how you're measuring the overhead of each instance?
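
One common way to measure this on Linux (an assumption; the thread doesn't say what was actually used) is to sample the process's resident set size from /proc/self/status around a batch of instantiations:

```rust
use std::fs;

/// Hypothetical helper: current resident set size in KiB, Linux-only.
fn rss_kib() -> Option<u64> {
    let status = fs::read_to_string("/proc/self/status").ok()?;
    let line = status.lines().find(|l| l.starts_with("VmRSS:"))?;
    line.split_whitespace().nth(1)?.parse().ok()
}
```

Dividing the VmRSS delta across N instantiations gives resident memory per instance; reading the VmSize field the same way gives the virtual reservation instead.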

