Stream: wasmtime

Topic: memory_pool issue on aarch64 and riscv64


view this post on Zulip Mats Brorsson (May 13 2024 at 13:26):

I have an issue with wasmtime on both aarch64 (nvidia jetson nano) and riscv64 (VisionFive 2). I have tried both the Hello WASI HTTP example and a simple spin app with the same result on both platforms. I get a panic when wasmtime tries to mmap:

2024-05-13T13:17:20.823024Z DEBUG wasmtime_runtime::instance::allocator::pooling::memory_pool: creating memory pool: SlabConstraints { expected_slot_bytes: 4294967296, max_memory_bytes: 4294967296, num_slots: 1000, num_pkeys_available: 0, guard_bytes: 2147483648, guard_before_slots: true } ->
SlabLayout { num_slots: 1000, slot_bytes: 6442450944, max_memory_bytes: 4294967296, pre_slab_guard_bytes: 2147483648, post_slab_guard_bytes: 0, num_stripes: 1 } (total: 6444598427648)
Error: failed to create memory pool mapping

Caused by:
   0: mmap failed to reserve 0x5dc80000000 bytes
   1: Cannot allocate memory (os error 12)

aarch64 environment:

jetson@360lab-nano2:~/docker/dockercon$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.6 LTS
Release:        18.04
Codename:       bionic
jetson@360lab-nano2:~/docker/dockercon$ uname -a
Linux 360lab-nano2 4.9.337-tegra #1 SMP PREEMPT Thu Jun 8 21:19:14 PDT 2023 aarch64 aarch64 aarch64 GNU/Linux

riscv64 environment:

ubuntu@ubuntu:~/src$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 24.04 LTS
Release:        24.04
Codename:       noble
ubuntu@ubuntu:~/src$ uname -a
Linux ubuntu 6.8.0-31-generic #31.1-Ubuntu SMP PREEMPT_DYNAMIC Sun Apr 21 01:12:53 UTC 2024 riscv64 riscv64 riscv64 GNU/Linux

The huge mmap seems strange to me, but it also happens on x86_64, where it seems to work.

Any help is appreciated.

view this post on Zulip Alex Crichton (May 13 2024 at 14:45):

Is overcommit on the aarch64/riscv64 systems turned off perhaps?

view this post on Zulip Lann Martin (May 13 2024 at 14:49):

The default overcommit heuristic isn't really documented either: https://www.kernel.org/doc/html/v5.1/vm/overcommit-accounting.html

It ensures a seriously wild allocation fails

view this post on Zulip Lann Martin (May 13 2024 at 14:51):

sysctl vm.overcommit_memory=1 might fix it, but it seems like an oversized hammer

view this post on Zulip Mats Brorsson (May 13 2024 at 14:53):

sysctl vm.overcommit_memory shows 0 on all the machines I am working on, including the x86_64 machine where this works.
I tried setting it to 1, but it didn't alter the behaviour.

view this post on Zulip Mats Brorsson (May 13 2024 at 17:37):

But what is the reason for allocating 6000 GB of address space?

view this post on Zulip Alex Crichton (May 13 2024 at 17:39):

Oh if overcommit_memory doesn't work then I'm not sure what's going on in the kernels here.

If you're doing wasmtime serve, can you try passing -O pooling-allocator=n and see if that works? The 6T address space reservation is the default setting of the pooling allocator. It's not actually allocating that much memory; it's just reserving that much virtual address space

view this post on Zulip Lann Martin (May 13 2024 at 17:41):

-O pooling-allocator=n

Only works on recent main right?

view this post on Zulip Mats Brorsson (May 13 2024 at 17:42):

That works @Alex Crichton !

view this post on Zulip Mats Brorsson (May 13 2024 at 17:42):

I am on version 21.0

view this post on Zulip Alex Crichton (May 13 2024 at 17:42):

@Lann Martin configuring the options without -O pooling-allocator[=y/n] only works on main, but disabling it or passing the -O pooling-allocator option explicitly should work

view this post on Zulip Alex Crichton (May 13 2024 at 17:43):

@Mats Brorsson are you able to share how you installed the aarch64/riscv64 versions of Linux? Are they stock versions of an OS for example? I'd be curious to try to dig in more why this isn't working on those platforms

view this post on Zulip Mats Brorsson (May 13 2024 at 17:45):

For the aarch64 I used NVIDIA's image for the Jetson Nano, which is now very old. It comes from this site: https://developer.nvidia.com/embedded/learn/get-started-jetson-nano-devkit#write

For riscv64, I got the image directly from this page: https://ubuntu.com/download/risc-v (select VisionFive 2)


view this post on Zulip Mats Brorsson (May 13 2024 at 17:47):

Is there a way to control the allocator used when wasmtime is used as a library, e.g. via environment variables?

BTW: where are wasmtime's CLI options documented? They are not here: https://docs.wasmtime.dev/cli-options.html

view this post on Zulip Lann Martin (May 13 2024 at 17:48):

wasmtime -O help

view this post on Zulip Alex Crichton (May 13 2024 at 17:48):

Currently there's not an env var for this, no, but it can be programmatically controlled if you're working with an embedding (Config::allocation_strategy).

For the CLI options they're currently only documented through the CLI itself as Lann said
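For embedders, a minimal sketch of what controlling this via `Config::allocation_strategy` could look like (assuming the `wasmtime` and `anyhow` crates; method names are from the wasmtime 21-era API, so double-check against your version's docs):

```rust
use wasmtime::{Config, Engine, InstanceAllocationStrategy, PoolingAllocationConfig};

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();

    // Option 1: on-demand allocation, which avoids the large up-front
    // virtual address space reservation entirely (the -O pooling-allocator=n
    // equivalent).
    config.allocation_strategy(InstanceAllocationStrategy::OnDemand);

    // Option 2: keep the pooling allocator but shrink its reservation by
    // configuring fewer memory slots (the -O pooling-total-memories=N
    // equivalent).
    let mut pool = PoolingAllocationConfig::default();
    pool.total_memories(56);
    // config.allocation_strategy(InstanceAllocationStrategy::Pooling(pool));

    let _engine = Engine::new(&config)?;
    Ok(())
}
```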

view this post on Zulip Alex Crichton (May 13 2024 at 17:50):

If you're able, can you try running -O pooling-allocator=y -O pooling-total-memories=N with a few values of N?

view this post on Zulip Alex Crichton (May 13 2024 at 17:50):

I'm curious what fails and what doesn't

view this post on Zulip Alex Crichton (May 13 2024 at 17:50):

the default is 1000 which is the 6T reservation, but I'm curious if, for example, 100 works or even 10

view this post on Zulip Mats Brorsson (May 13 2024 at 17:54):

It works for N up to 56 but fails at 57

view this post on Zulip Lann Martin (May 13 2024 at 17:55):

ah yes the well-known 360GB limit :neutral:

view this post on Zulip Mats Brorsson (May 13 2024 at 17:55):

I do not have access to my RISCV-board from home (forgot its IP-address :-) so this is for the Jetson nano board

view this post on Zulip Alex Crichton (May 13 2024 at 17:56):

interesting, thanks for testing!

view this post on Zulip Lann Martin (May 13 2024 at 17:58):

there is apparently a 39-bit virtual address configuration for aarch64

view this post on Zulip Mats Brorsson (May 13 2024 at 18:43):

Lann Martin said:

there is apparently a 39-bit virtual address configuration for aarch64

Indeed, and for RISCV64 it seems to be 48 or 39 bits. https://www.kernel.org/doc/html/v6.4/riscv/vm-layout.html

Is there a way to programmatically find this out?

Also, what can happen if I do not use the pooling-allocator? Is it a performance or a correctness issue?

view this post on Zulip Chris Fallin (May 13 2024 at 18:45):

Pooling allocator vs. not is purely a performance question: every feature is supported with the "on-demand" allocator as well

view this post on Zulip Mats Brorsson (May 13 2024 at 18:52):

Maybe this from /proc/meminfo:
Aarch64: VmallocTotal: 263061440 kB
x86_64: VmallocTotal: 34359738367 kB

view this post on Zulip Lann Martin (May 13 2024 at 19:09):

Hmm, well, I'm not sure exactly how that relates to the 360GB limit you found in practice, but I guess it's... suspiciously close?

view this post on Zulip Lann Martin (May 13 2024 at 19:30):

Try: grep 'address sizes' /proc/cpuinfo | uniq

view this post on Zulip Alex Crichton (May 13 2024 at 20:50):

@Mats Brorsson would you be able to confirm that https://github.com/bytecodealliance/wasmtime/pull/8610 works on the systems that wasmtime serve doesn't currently work on?

This commit aims to address #8607 by dynamically determining whether the pooling allocator should be used rather than unconditionally using it. It looks like some systems don't have enough virtual ...

view this post on Zulip Alex Crichton (May 13 2024 at 20:50):

I'd like to double-check before merging that to confirm

view this post on Zulip Mats Brorsson (May 14 2024 at 04:40):

Lann Martin said:

Try: grep 'address sizes' /proc/cpuinfo | uniq

That doesn't work universally. For instance on my aarch64-board, it only shows this for each processor:

processor       : 0
model name      : ARMv8 Processor rev 1 (v8l)
BogoMIPS        : 38.40
Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32
CPU implementer : 0x41
CPU architecture: 8
CPU variant     : 0x1
CPU part        : 0xd07
CPU revision    : 1

view this post on Zulip Mats Brorsson (May 14 2024 at 13:17):

Alex Crichton said:

Mats Brorsson would you be able to confirm that https://github.com/bytecodealliance/wasmtime/pull/8610 works on the systems that wasmtime serve doesn't currently work on?

I can confirm that this fix works on both the Aarch64 board and the RISCV64 board that I use. However, I will try to see how spin (and the spin shim for containerd) uses the library to see if a similar fix can be introduced there.

view this post on Zulip Lann Martin (May 14 2024 at 13:20):

Thanks! That ought to fix https://github.com/fermyon/spin/issues/2343

There are (at least) two environments where the Wasmtime pooling allocator is known to fail with Spin's default settings: QEMU: #1785 Systems with <2GB (?) RAM: #2119 Under spin up we should try to...

view this post on Zulip Lann Martin (May 14 2024 at 13:20):

I wasn't sure what "some operation known to fail with pooling" meant before, but it seems like that PR does the trick

view this post on Zulip Mats Brorsson (May 14 2024 at 13:22):

However, I am not sure whether wasmtime's serve functionality, when used from the library instead of the CLI, goes through this same code?

view this post on Zulip Lann Martin (May 14 2024 at 13:22):

wasmtime serve is not a library feature; Spin is an entirely separate implementation of the same basic idea

view this post on Zulip Mats Brorsson (May 14 2024 at 13:23):

ok, but I get the same error with the 6T memory reservation in wasmtime, so the fix may need to be somewhere else, or in an additional place.

view this post on Zulip Lann Martin (May 14 2024 at 13:26):

Ah, yes, the PR above only fixes wasmtime serve. It would be possible to add "should this host use pooling" auto-detection to the wasmtime lib, but that may be a bit more "opinionated" than the lib usually is.
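The auto-detection Lann describes could be approximated in an embedding with an explicit fallback, roughly what the CLI PR does. A hypothetical sketch using the `wasmtime` crate (assuming `InstanceAllocationStrategy::pooling()` yields the default pooling configuration; verify against your version's docs):

```rust
use wasmtime::{Config, Engine, InstanceAllocationStrategy};

// Hypothetical helper: try to build an Engine with the pooling allocator,
// and fall back to on-demand allocation if the large virtual-memory
// reservation fails (as it does on small-virtual-address-space
// aarch64/riscv64 systems).
fn engine_with_fallback() -> wasmtime::Result<Engine> {
    let mut cfg = Config::new();
    cfg.allocation_strategy(InstanceAllocationStrategy::pooling());
    Engine::new(&cfg).or_else(|_| {
        let mut cfg = Config::new();
        cfg.allocation_strategy(InstanceAllocationStrategy::OnDemand);
        Engine::new(&cfg)
    })
}
```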

view this post on Zulip Lann Martin (May 14 2024 at 13:28):

It would probably be worth at least a note somewhere in the pooling docs


Last updated: Dec 23 2024 at 14:03 UTC