plicease opened Issue #1501:
- What are the steps to reproduce the issue?
```
ollisg@muse:~/test/wasmtime/wasmtime-v0.15.0-x86_64-linux$ ulimit -v 3145728
ollisg@muse:~/test/wasmtime/wasmtime-v0.15.0-x86_64-linux$ cat hello.rs
fn main() {
    println!("Hello, world!");
}
ollisg@muse:~/test/wasmtime/wasmtime-v0.15.0-x86_64-linux$ rustc hello.rs --target wasm32-wasi
ollisg@muse:~/test/wasmtime/wasmtime-v0.15.0-x86_64-linux$ ./wasmtime hello.wasm
Error: failed to run main module `hello.wasm`

Caused by:
    0: failed to instantiate "hello.wasm"
    1: Insufficient resources: Cannot allocate memory (os error 12)
```
- What do you expect to happen? What does actually happen? Does it panic, and if so, with which assertion?

I don't expect such a high virtual memory limit to cause an OOM on such a small program.
- Which Wasmtime version / commit hash / branch are you using?
This is using the `0.15.0` tarball, but I have been able to reproduce it also using the `0.15.0` c-api via Perl/FFI, see https://github.com/perlwasm/Wasm/issues/22
- If relevant, can you include some extra information about your environment?
(Rust version, operating system, architecture...)

I was able to reproduce on my Debian Linux x86_64 box:
```
ollisg@muse:~/test/wasmtime/wasmtime-v0.15.0-x86_64-linux$ rustc --version
rustc 1.42.0 (b8cedc004 2020-03-09)
ollisg@muse:~/test/wasmtime/wasmtime-v0.15.0-x86_64-linux$ cat /etc/debian_version
9.12
```

cpantesters use a variety of platforms, with the binary wasmtime tarball, so I am not sure that this matters, but I am including it for completeness.
bjorn3 commented on Issue #1501:
I think `ulimit -v` expects a byte value, not a kilobyte value as you expected. For example, running `ulimit -v 100` crashes my shell, while it only consumes ~15kb of memory. This means that you limited Wasmtime to 3MiB, not 3GiB.
plicease commented on Issue #1501:
The interface seems to vary from shell to shell, but both bash and tcsh are reporting kbytes for me:
```
ollisg@muse:~/test/wasmtime/wasmtime-v0.15.0-x86_64-linux$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 14549
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 14549
virtual memory          (kbytes, -v) 3145728
file locks                      (-x) unlimited
ollisg@muse:~/test/wasmtime/wasmtime-v0.15.0-x86_64-linux$ tcsh
muse% limit
cputime      unlimited
filesize     unlimited
datasize     unlimited
stacksize    8192 kbytes
coredumpsize 0 kbytes
memoryuse    unlimited
vmemoryuse   3145728 kbytes
descriptors  1024
memorylocked 64 kbytes
maxproc      14549
maxlocks     unlimited
maxsignal    14549
maxmessage   819200
maxnice      0
maxrtprio    0
maxrttime    unlimited
```
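For what it's worth, the units question can also be settled without man pages: the kernel stores `RLIMIT_AS` in bytes, so after bash's `ulimit -v 3145728` a child process should observe 3145728 * 1024 = 3221225472 bytes. A minimal sketch of such a check (not from the thread; assumes 64-bit Linux and the `libc` crate):

```rust
// Print the raw RLIMIT_AS value the kernel reports for this process.
// The kernel keeps the limit in bytes; bash's `ulimit -v N` passes
// N * 1024, so `ulimit -v 3145728` should print 3221225472 (3 GiB).
fn main() {
    let mut lim = libc::rlimit { rlim_cur: 0, rlim_max: 0 };
    // Safety: we pass a valid pointer to an initialized rlimit struct.
    let rc = unsafe { libc::getrlimit(libc::RLIMIT_AS, &mut lim) };
    assert_eq!(rc, 0, "getrlimit failed");
    if lim.rlim_cur == libc::RLIM_INFINITY {
        println!("RLIMIT_AS: unlimited");
    } else {
        println!("RLIMIT_AS: {} bytes", lim.rlim_cur);
    }
}
```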
plicease commented on Issue #1501:
The cpantesters were using `BSD::Resource::RLIMIT_AS` set to `3*1024*1024*1024`.

I am not, tbh, familiar with the `BSD::Resource` module, but I am fairly confident that they are not setting the limit to 3MB; they test tons of Perl modules every day, and the results wouldn't be very useful with such a low limit.

https://github.com/eserte/srezic-misc/blob/127d3e2c7dc58ed15aa925a199c4ed004936fd13/scripts/cpan_smoke_modules#L1599
https://github.com/perlwasm/Wasm/issues/22#issuecomment-612620036
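For context, `setrlimit(2)` (which `BSD::Resource` wraps) always takes `RLIMIT_AS` in bytes, and resource limits are inherited across fork/exec, so the cap applies to any `wasmtime` the smokers spawn. A rough Rust sketch of what their setup amounts to (not the smokers' actual code; assumes 64-bit Linux, the `libc` crate, and the `./wasmtime` binary from the repro above):

```rust
use std::process::Command;

fn main() {
    // 3 GiB in bytes, matching the smokers' 3*1024*1024*1024 RLIMIT_AS.
    let limit: libc::rlim_t = 3 * 1024 * 1024 * 1024;
    let lim = libc::rlimit { rlim_cur: limit, rlim_max: limit };
    // Safety: we pass a valid pointer to an initialized rlimit struct.
    let rc = unsafe { libc::setrlimit(libc::RLIMIT_AS, &lim) };
    assert_eq!(rc, 0, "setrlimit failed");
    // The limit survives exec, so the child runs under the 3 GiB cap.
    let status = Command::new("./wasmtime").arg("hello.wasm").status();
    println!("wasmtime exited with: {:?}", status);
}
```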
bjorn3 commented on Issue #1501:
I experimentally tried running `ulimit -v` in bash and then running `ps` to report the amount of memory used by `ps`. It worked fine when I set the limit to exactly the amount of `VSZ` used by `ps` (7560 bytes), but it gave an OOM when I set it to exactly 1 less than that. This confirms that it is measured in bytes, not kilobytes.

> 3*1024*1024*1024

That is 3GiB, not 3MiB.
plicease commented on Issue #1501:
My man page documents that ps indicates VSZ is in kilobytes:
> vsz VSZ virtual memory size of the process in KiB (1024-byte units). Device mappings are currently excluded; this is subject to change. (alias vsize).

Maybe yours is different? My bash man page also says that ulimit -v is in 1024-byte increments:

> If limit is given, and the -a option is not used, limit is the new value of the specified resource. If no option is given, then -f is assumed. Values are in 1024-byte increments, except for -t, which is in seconds; -p, which is in units of 512-byte blocks; -P, -T, -b, -k, -n, and -u, which are unscaled values; and, when in Posix mode, -c and -f, which are in 512-byte increments. The return status is 0 unless an invalid option or argument is supplied, or an error occurs while setting a new limit.

The outputs of `ulimit -a` and tcsh `limit` also seem to indicate kbytes.
tschneidereit commented on Issue #1501:
IIUC, this is probably caused by an optimization that Wasmtime/Cranelift, like some other WebAssembly runtimes, employs: instead of doing explicit bounds checks for all memory accesses, the runtime reserves 8GB of guard pages before and after a Wasm instance's memory, which are marked as `PROT_NONE`, triggering a trap on access, which is then handled by the runtime. For more details, see this write-up of an implementation plan for V8.

CC @sunfishcode, who knows infinitely more about this than me :)
As an aside, Perlwasm is very exciting! :tada:
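To make the accounting problem concrete: a `PROT_NONE` reservation consumes no physical memory, but it does count against `RLIMIT_AS`, which is why a guard-region scheme like the one described above collides with a modest `ulimit -v`. A minimal sketch of the reservation pattern (not Wasmtime's actual code; assumes 64-bit Linux and the `libc` crate):

```rust
fn main() {
    // Reserve 6 GiB of address space with no access permissions, the
    // way guard-page-based bounds checking schemes do. No physical
    // pages are committed, but RLIMIT_AS still charges the full size.
    const SIX_GIB: usize = 6 << 30;
    let ptr = unsafe {
        libc::mmap(
            std::ptr::null_mut(),
            SIX_GIB,
            libc::PROT_NONE,
            libc::MAP_PRIVATE | libc::MAP_ANONYMOUS,
            -1,
            0,
        )
    };
    if ptr == libc::MAP_FAILED {
        // Under a low RLIMIT_AS this fails with the same ENOMEM
        // ("Cannot allocate memory (os error 12)") the issue reports.
        eprintln!("mmap failed: {}", std::io::Error::last_os_error());
    } else {
        println!("reserved 6 GiB of address space at {:p}", ptr);
        unsafe { libc::munmap(ptr, SIX_GIB) };
    }
}
```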
bjorn3 commented on Issue #1501:
> My man page documents that ps indicates VSZ is in kilobytes:
Ok, my bad.
plicease commented on Issue #1501:
> IIUC, this is probably caused by an optimization that Wasmtime/Cranelift, like some other WebAssembly runtimes, employs: instead of doing explicit bounds checks for all memory accesses, the runtime reserves 8GB of guard pages before and after a Wasm instance's memory, which are marked as `PROT_NONE`, triggering a trap on access, which is then handled by the runtime.

Thanks, this is very helpful. It looks like `ulimit -v` (and the equivalent which cpantesters use) limits the virtual address space as a way to restrain an out-of-control process from consuming swap (I am not sure, but I don't think `ulimit -m` works in practice), but it includes in its accounting `PROT_NONE` pages, which don't actually consume any resources, at least not the way Wasmtime is using them. This is a deliberate, and probably reasonable, design decision, though I am not sure how to configure cpantesters to get the memory limits they need.

> As an aside, Perlwasm is very exciting! :tada:
As an aside, I am pretty excited about Wasmtime, and bringing WebAssembly into the Perl ecosystem.
alexcrichton commented on Issue #1501:
I've posted https://github.com/bytecodealliance/wasmtime/pull/1513 which should allow tuning, at runtime, the allocation characteristics of wasm memories. Notably @plicease you'll be able to use the C API to say that memories should not be allocated as 6GB of virtual memory, but rather you can configure memories to be precisely allocated with zero extra overhead. This'll come at the cost of runtime performance of the JIT code, but should help you get tests passing in CI!
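For the Rust side, the knobs from that PR look roughly like the following (a sketch against the Wasmtime 0.16-era `Config` API that #1513 introduced; later releases may rename these methods, so check the current docs):

```rust
use wasmtime::Config;

// Build a Config that minimizes virtual-memory reservations.
fn small_footprint_config() -> Config {
    let mut config = Config::new();
    // Never reserve multi-GiB static memories up front; fall back to
    // dynamically allocated memories that grow on demand.
    config.static_memory_maximum_size(0);
    // Drop the guard regions entirely, trading JIT bounds-check speed
    // for a minimal RLIMIT_AS footprint.
    config.static_memory_guard_size(0);
    config.dynamic_memory_guard_size(0);
    config
}
```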
plicease commented on Issue #1501:
> I've posted #1513 which should allow tuning, at runtime, the allocation characteristics of wasm memories. Notably @plicease you'll be able to use the C API to say that memories should _not_ be allocated as 6GB of virtual memory, but rather you can configure memories to be precisely allocated with zero extra overhead. This'll come at the cost of runtime performance of the JIT code, but should help you get tests passing in CI!
Very cool, I look forward to testing this :)
sunfishcode closed Issue #1501.
plicease commented on Issue #1501:
Looks like `wasmtime_config_static_memory_guard_size` and `wasmtime_config_dynamic_memory_guard_size` are missing the `_set` suffix that the other config setters have.