Stream: git-wasmtime

Topic: wasmtime / Issue #1501 Setting vmemoryuse to 3G causes OOM


Wasmtime GitHub notifications bot (Apr 12 2020 at 13:56):

plicease opened Issue #1501:

ollisg@muse:~/test/wasmtime/wasmtime-v0.15.0-x86_64-linux$ ulimit -v 3145728
ollisg@muse:~/test/wasmtime/wasmtime-v0.15.0-x86_64-linux$ cat hello.rs
fn main() {
  println!("Hello, world!");
}
ollisg@muse:~/test/wasmtime/wasmtime-v0.15.0-x86_64-linux$ rustc hello.rs --target wasm32-wasi
ollisg@muse:~/test/wasmtime/wasmtime-v0.15.0-x86_64-linux$ ./wasmtime hello.wasm
Error: failed to run main module `hello.wasm`

Caused by:
    0: failed to instantiate "hello.wasm"
    1: Insufficient resources: Cannot allocate memory (os error 12)

I don't expect such a high virtual memory limit to cause an OOM on such a small program.

This is using the 0.15.0 tarball, but I have also been able to reproduce it using the 0.15.0 C API via Perl/FFI; see https://github.com/perlwasm/Wasm/issues/22

I was able to reproduce this on my Debian Linux x86_64 box:

ollisg@muse:~/test/wasmtime/wasmtime-v0.15.0-x86_64-linux$ rustc --version
rustc 1.42.0 (b8cedc004 2020-03-09)
ollisg@muse:~/test/wasmtime/wasmtime-v0.15.0-x86_64-linux$ cat /etc/debian_version
9.12

cpantesters use a variety of platforms, with the binary wasmtime tarball, so I am not sure that this matters, but I am including it for completeness.

Wasmtime GitHub notifications bot (Apr 12 2020 at 13:56):

plicease labeled Issue #1501.

Wasmtime GitHub notifications bot (Apr 12 2020 at 13:56):

plicease edited Issue #1501.

Wasmtime GitHub notifications bot (Apr 12 2020 at 14:05):

bjorn3 commented on Issue #1501:

I think ulimit -v expects a byte value, not a kilobyte value as you expected. For example, running ulimit -v 100 crashes my shell, even though the shell only consumes ~15kb of memory.

Wasmtime GitHub notifications bot (Apr 12 2020 at 14:06):

bjorn3 edited a comment on Issue #1501:

I think ulimit -v expects a byte value, not a kilobyte value as you expected. For example, running ulimit -v 100 crashes my shell, even though the shell only consumes ~15kb of memory. This means that you limited Wasmtime to 3MiB, not 3GiB.
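
(Editor's note, not part of the thread: bash's man page, quoted later in this discussion, says `ulimit -v` values are in 1024-byte increments, while the underlying getrlimit(2)/setrlimit(2) interface works in bytes. A minimal sketch for checking what limit actually reached a process, assuming the `libc` crate:)

```rust
// Print the RLIMIT_AS value the kernel actually enforces.
// getrlimit(2) reports bytes; bash's `ulimit -v` multiplies its
// argument by 1024 before calling setrlimit(2).
fn main() {
    let mut lim = libc::rlimit { rlim_cur: 0, rlim_max: 0 };
    let rc = unsafe { libc::getrlimit(libc::RLIMIT_AS, &mut lim) };
    assert_eq!(rc, 0, "getrlimit failed");
    if lim.rlim_cur == libc::RLIM_INFINITY {
        println!("RLIMIT_AS: unlimited");
    } else {
        println!("RLIMIT_AS: {} bytes ({} KiB)", lim.rlim_cur, lim.rlim_cur / 1024);
    }
}
```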

Wasmtime GitHub notifications bot (Apr 12 2020 at 14:27):

plicease commented on Issue #1501:

The interface seems to vary from shell to shell, but both bash and tcsh are reporting kbytes for me:

ollisg@muse:~/test/wasmtime/wasmtime-v0.15.0-x86_64-linux$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 14549
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 14549
virtual memory          (kbytes, -v) 3145728
file locks                      (-x) unlimited
ollisg@muse:~/test/wasmtime/wasmtime-v0.15.0-x86_64-linux$ tcsh
muse% limit
cputime      unlimited
filesize     unlimited
datasize     unlimited
stacksize    8192 kbytes
coredumpsize 0 kbytes
memoryuse    unlimited
vmemoryuse   3145728 kbytes
descriptors  1024
memorylocked 64 kbytes
maxproc      14549
maxlocks     unlimited
maxsignal    14549
maxmessage   819200
maxnice      0
maxrtprio    0
maxrttime    unlimited
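
(Editor's note, an illustration added for this write-up: the arithmetic works out if the value is in KiB, since 3145728 KiB is exactly 3 GiB.)

```rust
// Sanity check of the unit arithmetic in this thread:
// 3145728 KiB * 1024 bytes/KiB = 3221225472 bytes = 3 GiB exactly.
fn main() {
    let limit_kib: u64 = 3_145_728; // value passed to `ulimit -v`
    assert_eq!(limit_kib * 1024, 3 * 1024 * 1024 * 1024);
    println!("{} KiB = {} bytes = 3 GiB", limit_kib, limit_kib * 1024);
}
```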

Wasmtime GitHub notifications bot (Apr 12 2020 at 14:32):

plicease commented on Issue #1501:

The cpantesters were using BSD::Resource::RLIMIT_AS set to 3*1024*1024*1024

Tbh I am not familiar with the BSD::Resource module, but I am fairly confident that they are not setting the limit to 3MB; they test tons of Perl modules every day, and the results wouldn't be very useful with such a low limit.

https://github.com/eserte/srezic-misc/blob/127d3e2c7dc58ed15aa925a199c4ed004936fd13/scripts/cpan_smoke_modules#L1599
https://github.com/perlwasm/Wasm/issues/22#issuecomment-612620036
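
(Editor's note: what the smoker does through Perl's BSD::Resource boils down to a plain setrlimit(2) call. A sketch of the equivalent in Rust with the `libc` crate, an illustration rather than the smoker's actual code:)

```rust
// Cap this process's address space at 3 GiB, as the cpantesters smoker
// does via BSD::Resource. Note that setrlimit(2) takes bytes directly,
// unlike bash's `ulimit -v`, which scales its argument by 1024.
fn main() {
    let three_gib: libc::rlim_t = 3 * 1024 * 1024 * 1024;
    let lim = libc::rlimit { rlim_cur: three_gib, rlim_max: three_gib };
    let rc = unsafe { libc::setrlimit(libc::RLIMIT_AS, &lim) };
    assert_eq!(rc, 0, "setrlimit failed");
    // Any later allocation that would push the address space past 3 GiB
    // now fails with ENOMEM (os error 12), the error seen in this issue.
}
```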

Wasmtime GitHub notifications bot (Apr 12 2020 at 14:36):

bjorn3 commented on Issue #1501:

I experimentally tried running ulimit -v in bash and then running ps to report the amount of memory used by ps. It worked fine when I set the limit to exactly the VSZ used by ps (7560 bytes), but it gave an OOM when I set it to exactly 1 less than that. This confirms that it is measured in bytes, not kilobytes.

3*1024*1024*1024

That is 3GiB, not 3MiB

Wasmtime GitHub notifications bot (Apr 12 2020 at 14:52):

plicease commented on Issue #1501:

My man page documents that ps reports VSZ in kilobytes:

vsz         VSZ       virtual memory size of the process in KiB (1024-byte units).  Device mappings are currently excluded; this is subject to change.  (alias vsize).

Maybe yours is different? My bash man page also says that ulimit -v is in 1024-byte increments:

              If limit is given, and the -a option is not used, limit is the new value of the specified resource.  If no option is given, then -f is assumed.  Values are in  1024-byte
              increments, except for -t, which is in seconds; -p, which is in units of 512-byte blocks; -P, -T, -b, -k, -n, and -u, which are unscaled values; and, when in Posix mode,
              -c and -f, which are in 512-byte increments.  The return status is 0 unless an invalid option or argument is supplied, or an error occurs while setting a new limit.

The outputs of `ulimit -a` and tcsh `limit` also seem to indicate kbytes.

Wasmtime GitHub notifications bot (Apr 12 2020 at 15:08):

plicease edited a comment on Issue #1501.

Wasmtime GitHub notifications bot (Apr 12 2020 at 15:09):

plicease edited a comment on Issue #1501.

Wasmtime GitHub notifications bot (Apr 12 2020 at 15:12):

tschneidereit commented on Issue #1501:

IIUC, this is probably caused by an optimization that Wasmtime/Cranelift, like some other WebAssembly runtimes, employs: instead of doing explicit bounds checks for all memory accesses, the runtime reserves 8GB of guard pages before and after a Wasm instance's memory. These pages are marked PROT_NONE, so any access triggers a trap, which is then handled by the runtime. For more details, see this write-up of an implementation plan for V8.

CC @sunfishcode, who knows infinitely more about this than me :)

As an aside, Perlwasm is very exciting! :tada:
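
(Editor's note: to make the guard-page mechanism above concrete, here is a standalone sketch using the `libc` crate, an illustration and not Wasmtime's actual code; the 8 GiB figure follows the comment above.)

```rust
// Reserve 8 GiB of address space with no access rights, as
// guard-page-based bounds checking does. The reservation counts against
// RLIMIT_AS but consumes no physical memory: PROT_NONE pages can never
// be touched, so they are never backed by RAM.
fn main() {
    let len = 8usize * 1024 * 1024 * 1024;
    let ptr = unsafe {
        libc::mmap(
            std::ptr::null_mut(),
            len,
            libc::PROT_NONE, // any access traps (SIGSEGV), handled by the runtime
            libc::MAP_PRIVATE | libc::MAP_ANONYMOUS,
            -1,
            0,
        )
    };
    if ptr == libc::MAP_FAILED {
        // Under `ulimit -v 3145728` (3 GiB) this mmap fails with ENOMEM,
        // which is the failure mode this issue reports.
        eprintln!("mmap failed: {}", std::io::Error::last_os_error());
    } else {
        println!("reserved 8 GiB at {:p} with no physical backing", ptr);
        unsafe { libc::munmap(ptr, len); }
    }
}
```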

Wasmtime GitHub notifications bot (Apr 12 2020 at 15:31):

bjorn3 commented on Issue #1501:

My man page documents that ps reports VSZ in kilobytes:

Ok, my bad.

Wasmtime GitHub notifications bot (Apr 12 2020 at 17:48):

plicease commented on Issue #1501:

IIUC, this is probably caused by an optimization that Wasmtime/Cranelift, like some other WebAssembly runtimes, employs: instead of doing explicit bounds checks for all memory accesses, the runtime reserves 8GB of guard pages before and after a Wasm instance's memory. These pages are marked PROT_NONE, so any access triggers a trap, which is then handled by the runtime. For more details, see this write-up of an implementation plan for V8.

Thanks, this is very helpful. It looks like ulimit -v (and the equivalent that cpantesters use) limits the virtual address space as a way to keep an out-of-control process from consuming swap (I am not sure, but I don't think ulimit -m works in practice). However, its accounting includes PROT_NONE pages, which don't actually consume any resources, at least not the way Wasmtime is using them. This is a deliberate, and probably reasonable, design decision, though I am not sure how to configure cpantesters to get the memory limits they need.

As an aside, Perlwasm is very exciting! :tada:

As an aside, I am pretty excited about Wasmtime, and bringing WebAssembly into the Perl ecosystem.
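
(Editor's note: the distinction drawn above, address space reserved versus memory actually consumed, is visible in /proc on Linux: a PROT_NONE reservation inflates VmSize, which is what RLIMIT_AS polices, without moving VmRSS. A sketch demonstrating this, `libc` crate assumed:)

```rust
// Compare VmSize (virtual) and VmRSS (resident) before and after a
// 1 GiB PROT_NONE reservation: VmSize jumps, VmRSS stays put.
use std::fs;

fn vm_lines() -> String {
    fs::read_to_string("/proc/self/status")
        .unwrap()
        .lines()
        .filter(|l| l.starts_with("VmSize") || l.starts_with("VmRSS"))
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    println!("before:\n{}", vm_lines());
    let len = 1usize << 30; // 1 GiB
    let ptr = unsafe {
        libc::mmap(std::ptr::null_mut(), len, libc::PROT_NONE,
                   libc::MAP_PRIVATE | libc::MAP_ANONYMOUS, -1, 0)
    };
    assert_ne!(ptr, libc::MAP_FAILED);
    println!("after:\n{}", vm_lines()); // VmSize +~1 GiB, VmRSS unchanged
    unsafe { libc::munmap(ptr, len); }
}
```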

Wasmtime GitHub notifications bot (Apr 15 2020 at 14:48):

alexcrichton commented on Issue #1501:

I've posted https://github.com/bytecodealliance/wasmtime/pull/1513 which should allow tuning, at runtime, the allocation characteristics of wasm memories. Notably @plicease you'll be able to use the C API to say that memories should not be allocated as 6GB of virtual memory, but rather you can configure memories to be precisely allocated with zero extra overhead. This'll come at the cost of runtime performance of the JIT code, but should help you get tests passing in CI!
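
(Editor's note: on the Rust side, the knobs this PR introduced ended up on wasmtime's Config type. A sketch of the low-virtual-memory configuration follows; method names are as they existed in later 0.x releases, and exact signatures vary across wasmtime versions.)

```rust
// Configure wasmtime to allocate wasm linear memories precisely instead
// of reserving large static regions plus guard pages, trading some JIT
// speed for a footprint that fits under a tight RLIMIT_AS.
use wasmtime::{Config, Engine, Module};

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    config.static_memory_maximum_size(0); // disable the big static reservation
    config.static_memory_guard_size(0);   // no guard region after memory...
    config.dynamic_memory_guard_size(0);  // ...so bounds checks stay explicit
    let engine = Engine::new(&config)?;
    let module = Module::from_file(&engine, "hello.wasm")?;
    // ...instantiate and run as usual; memory is now allocated precisely.
    let _ = module;
    Ok(())
}
```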

Wasmtime GitHub notifications bot (Apr 15 2020 at 15:14):

plicease commented on Issue #1501:

I've posted #1513 which should allow tuning, at runtime, the allocation characteristics of wasm memories. Notably @plicease you'll be able to use the C API to say that memories should _not_ be allocated as 6GB of virtual memory, but rather you can configure memories to be precisely allocated with zero extra overhead. This'll come at the cost of runtime performance of the JIT code, but should help you get tests passing in CI!

Very cool, I look forward to testing this :)

Wasmtime GitHub notifications bot (Apr 30 2020 at 00:10):

sunfishcode closed Issue #1501.

Wasmtime GitHub notifications bot (May 05 2020 at 19:12):

plicease commented on Issue #1501:

Looks like wasmtime_config_static_memory_guard_size and wasmtime_config_dynamic_memory_guard_size are missing the _set suffix that the other config setters have.


Last updated: Oct 23 2024 at 20:03 UTC