Stream: git-wasmtime

Topic: wasmtime / Issue #1897 C API: expose wasmtime_linker_get_...


Wasmtime GitHub notifications bot (Jun 18 2020 at 06:48):

github-actions[bot] commented on Issue #1897:

Subscribe to Label Action

cc @peterhuene

This issue or pull request has been labeled: "wasmtime:c-api"

Thus the following users have been cc'd because of the following labels:

To subscribe or unsubscribe from this label, edit the .github/subscribe-to-label.json configuration file.

Wasmtime GitHub notifications bot (Jun 18 2020 at 18:11):

thibaultcha commented on Issue #1897:

Hi there!

Yes, I have been using this in my embedding and it's been working fine, returning the Extern as expected.

However, something has been bugging me, and I am not sure whether it is related: I am observing wasmtime_func_call() leaking some memory every time it is invoked. Details:

I've been using this new API like so:

wasm_name_t        mod_name, func_name;
wasm_extern_t     *func_extern;
wasm_func_t       *func;
wasmtime_error_t  *error;
wasm_trap_t       *trap;

/* ... */

error = wasmtime_linker_get_one_by_name(linker, &mod_name, &func_name, &func_extern);
if (error) {
    /* ... */
}

func = wasm_extern_as_func(func_extern);

error = wasmtime_func_call(func, NULL, 0, NULL, 0, &trap);

wasm_extern_delete(func_extern);

if (error || trap) {
    /* ... */
}

In a gdb session, I observe 1152 KB being allocated here and never freed. That amount sounded a lot like the default wasm stack to me, but even when I change the stack size in my Engine's config, I still see the same 1152 KB allocations.

Am I missing something here? I'm not sure if this is caused by a mistake in my use of wasmtime_func_call() or in this new C API...

Wasmtime GitHub notifications bot (Jun 18 2020 at 18:48):

thibaultcha edited a comment on Issue #1897:

Hi there!

Yes, I have been using this in my embedding and it's been working fine, returning the Extern as expected.

However, something has been bugging me, and I am not sure whether it is related: I am observing wasmtime_func_call() leaking some memory every time it is invoked. Details:

I've been using this new API like so:

// static_memory_maximum_size = 0 for testing

wasm_name_t        mod_name, func_name;
wasm_extern_t     *func_extern;
wasm_func_t       *func;
wasmtime_error_t  *error;
wasm_trap_t       *trap;

/* ... */

error = wasmtime_linker_get_one_by_name(linker, &mod_name, &func_name, &func_extern);
if (error) {
    /* ... */
}

func = wasm_extern_as_func(func_extern);

error = wasmtime_func_call(func, NULL, 0, NULL, 0, &trap);

wasm_extern_delete(func_extern);

if (error || trap) {
    /* ... */
}

In a gdb session, I observe 1152 KB being allocated here and never freed. That amount sounded a lot like the default wasm stack to me, but even when I change the stack size in my Engine's config, I still see the same 1152 KB allocations.

Am I missing something here? I'm not sure if this is caused by a mistake in my use of wasmtime_func_call() or in this new C API...

Wasmtime GitHub notifications bot (Jun 18 2020 at 18:49):

thibaultcha edited a comment on Issue #1897:

Hi there!

Yes, I have been using this in my embedding and it's been working fine, returning the Extern as expected.

However, something has been bugging me, and I am not sure whether it is related: I am observing wasmtime_func_call() leaking some memory every time it is invoked. Details:

I've been using this new API like so:

// static_memory_maximum_size = 0 for testing

wasm_name_t        mod_name, func_name;
wasm_extern_t     *func_extern;
wasm_func_t       *func;
wasmtime_error_t  *error;
wasm_trap_t       *trap;

/* ... */

error = wasmtime_linker_get_one_by_name(linker, &mod_name, &func_name, &func_extern);
if (error) {
    /* ... */
}

func = wasm_extern_as_func(func_extern); // this function is a nop

error = wasmtime_func_call(func, NULL, 0, NULL, 0, &trap);

wasm_extern_delete(func_extern);

if (error || trap) {
    /* ... */
}

In a gdb session, I observe 1152 KB being allocated here and never freed. That amount sounded a lot like the default wasm stack to me, but even when I change the stack size in my Engine's config, I still see the same 1152 KB allocations.

Am I missing something here? I'm not sure if this is caused by a mistake in my use of wasmtime_func_call() or in this new C API...

Wasmtime GitHub notifications bot (Jun 18 2020 at 19:08):

thibaultcha edited a comment on Issue #1897:

Hi there!

Yes, I have been using this in my embedding and it's been working fine, returning the Extern as expected.

However, something has been bugging me, and I am not sure whether it is related: I am observing wasmtime_func_call() leaking some memory every time it is invoked. Details:

I've been using this new API like so:

// static_memory_maximum_size = 0 for testing

wasm_name_t        mod_name, func_name;
wasm_extern_t     *func_extern;
wasm_func_t       *func;
wasmtime_error_t  *error;
wasm_trap_t       *trap;

/* ... */

error = wasmtime_linker_get_one_by_name(linker, &mod_name, &func_name, &func_extern);
if (error) {
    /* ... */
}

func = wasm_extern_as_func(func_extern); // this function is a nop

error = wasmtime_func_call(func, NULL, 0, NULL, 0, &trap);

wasm_extern_delete(func_extern);

if (error || trap) {
    /* ... */
}

In a gdb session, I observe 1152 KB being allocated here and never freed. That amount sounded a lot like the default wasm stack to me, but even when I change the stack size in my Engine's config, I still see the same 1152 KB allocations. When I remove the static_memory_maximum_size = 0 setting, I instead see the 6 GB of static memory being allocated and never freed on every call. Aren't these Externs in the Linker already backed by an underlying Instance?

Am I missing something here? I'm not sure if this is caused by a mistake in my use of wasmtime_func_call() or in this new C API...
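
For readers of this archive, here is a minimal sketch (not part of the original comment) of how the wasm stack size and the static_memory_maximum_size setting mentioned above might be applied through the C API. It assumes the wasmtime_config_max_wasm_stack_set and wasmtime_config_static_memory_maximum_size_set setters are available in the wasmtime.h being built against:

#include <wasm.h>
#include <wasmtime.h>

static wasm_engine_t *
make_engine(void)
{
    wasm_config_t  *config = wasm_config_new();

    /* shrink the maximum wasm stack; the 1152 KB observed above was
     * suspected to be the default stack allocation */
    wasmtime_config_max_wasm_stack_set(config, 512 * 1024);

    /* force dynamic memories, mirroring the
     * "static_memory_maximum_size = 0 for testing" comment above */
    wasmtime_config_static_memory_maximum_size_set(config, 0);

    /* the engine takes ownership of the config */
    return wasm_engine_new_with_config(config);
}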

Wasmtime GitHub notifications bot (Jun 18 2020 at 19:34):

thibaultcha commented on Issue #1897:

It does indeed make it hard to use! I found out what the issue is, although I cannot quite explain why without digging into the Linker. Said Linker does not have WASI defined, and the function was declared in a module like so:

#[no_mangle]
pub fn _start() {
    // nop
}

Turns out that changing the name from _start to, e.g., my_func does not trigger the allocation/leak issue anymore. I recall that _start isn't really supposed to be part of a Reactor module (am I wrong?), so I don't much mind renaming it; nonetheless, the behaviour seems to highlight an underlying, problematic issue, doesn't it?

Wasmtime GitHub notifications bot (Jun 18 2020 at 19:57):

thibaultcha edited a comment on Issue #1897:

It does indeed make it hard to use! I found out what the issue is, although I cannot quite explain why without digging into the Linker. Said Linker does not have WASI defined, and the function was declared in a module like so:

#[no_mangle]
pub fn _start() {
    // nop
}

Turns out that changing the name from _start to, e.g., my_func does not trigger the allocation/leak issue anymore. I recall that _start isn't really supposed to be part of a Reactor module (am I wrong?), so I don't much mind renaming it; nonetheless, the behaviour seems to highlight an underlying, problematic issue, doesn't it?

Wasmtime GitHub notifications bot (Jun 18 2020 at 21:08):

thibaultcha commented on Issue #1897:

One more somewhat related question on memory management via the C API: should I expect wasmtime_linker_delete()/wasm_store_delete()/wasm_engine_delete() to properly free all of the underlying instances of my Modules?

Valgrind seems to have a lot of complaints when I call these; below is an excerpt of the Valgrind output from calling the above 3 functions on my single Linker/Store/Engine during my application's exit:

==3218139== 488 bytes in 1 blocks are still reachable in loss record 66 of 97
==3218139==    at 0x483A809: malloc (vg_replace_malloc.c:307)
==3218139==    by 0x5048BBB: alloc::alloc::alloc (alloc.rs:80)
==3218139==    by 0x5048FF3: <alloc::alloc::Global as core::alloc::AllocRef>::alloc (alloc.rs:174)
==3218139==    by 0x5048B14: alloc::alloc::exchange_malloc (alloc.rs:268)
==3218139==    by 0x507420D: alloc::sync::Arc<T>::new (sync.rs:323)
==3218139==    by 0x5055E7A: wasmtime_jit::instantiate::CompiledModule::new (instantiate.rs:145)
==3218139==    by 0x4FB2663: wasmtime::module::Module::compile (module.rs:304)
==3218139==    by 0x4FB249F: wasmtime::module::Module::from_binary_unchecked (module.rs:275)
==3218139==    by 0x4FB2469: wasmtime::module::Module::from_binary (module.rs:244)
==3218139==    by 0x4B1BEFD: wasmtime_module_new (module.rs:56)
==3218139==    by 0x4DA9B1: [my_embedding] ([my_embedding].c:298)
==3218139==
==3218139== 640 bytes in 8 blocks are still reachable in loss record 72 of 97
==3218139==    at 0x483A809: malloc (vg_replace_malloc.c:307)
==3218139==    by 0x5EDC62F: alloc (alloc.rs:80)
==3218139==    by 0x5EDC62F: alloc (alloc.rs:174)
==3218139==    by 0x5EDC62F: exchange_malloc (alloc.rs:268)
==3218139==    by 0x5EDC62F: new<std::thread::Inner> (sync.rs:323)
==3218139==    by 0x5EDC62F: std::thread::Thread::new (mod.rs:1141)
==3218139==    by 0x598D95D: std::thread::Builder::spawn_unchecked (mod.rs:462)
==3218139==    by 0x598E2CB: std::thread::Builder::spawn (mod.rs:386)
==3218139==    by 0x596FB58: <rayon_core::registry::DefaultSpawn as rayon_core::registry::ThreadSpawn>::spawn (registry.rs:101)
==3218139==    by 0x5970862: rayon_core::registry::Registry::new (registry.rs:257)
==3218139==    by 0x596FDCD: rayon_core::registry::global_registry::{{closure}} (registry.rs:168)
==3218139==    by 0x596FF59: rayon_core::registry::set_global_registry::{{closure}} (registry.rs:194)
==3218139==    by 0x5973C23: std::sync::once::Once::call_once::{{closure}} (once.rs:264)
==3218139==    by 0x5EE6F47: std::sync::once::Once::call_inner (once.rs:416)
==3218139==    by 0x5973B50: std::sync::once::Once::call_once (once.rs:264)
==3218139==    by 0x596FE9A: rayon_core::registry::set_global_registry (registry.rs:193)
==3218139==
==3218139== 704 bytes in 1 blocks are indirectly lost in loss record 73 of 97
==3218139==    at 0x483A809: malloc (vg_replace_malloc.c:307)
==3218139==    by 0x4B3BA4B: alloc::alloc::alloc (alloc.rs:80)
==3218139==    by 0x4B3BE83: <alloc::alloc::Global as core::alloc::AllocRef>::alloc (alloc.rs:174)
==3218139==    by 0x4AE9B76: alloc::raw_vec::RawVec<T,A>::allocate_in (raw_vec.rs:152)
==3218139==    by 0x4AEDBDB: alloc::raw_vec::RawVec<T,A>::with_capacity_in (raw_vec.rs:135)
==3218139==    by 0x4AE62FD: alloc::raw_vec::RawVec<T>::with_capacity (raw_vec.rs:92)
==3218139==    by 0x4B4075D: alloc::vec::Vec<T>::with_capacity (vec.rs:358)
==3218139==    by 0x4B4DC63: <alloc::vec::Vec<T> as alloc::vec::SpecExtend<T,I>>::from_iter (vec.rs:2073)
==3218139==    by 0x4B5146B: <alloc::vec::Vec<T> as core::iter::traits::collect::FromIterator<T>>::from_iter (vec.rs:1981)
==3218139==    by 0x4AFF6EF: core::iter::traits::iterator::Iterator::collect (iterator.rs:1660)
==3218139==    by 0x4B3AEB8: wasmtime::module::wasmtime_module_new::{{closure}} (module.rs:61)
==3218139==    by 0x4B2858D: wasmtime::error::handle_result (error.rs:30)

...

Wasmtime GitHub notifications bot (Jun 18 2020 at 21:09):

thibaultcha edited a comment on Issue #1897:

One more somewhat related question on memory management via the C API: should I expect wasmtime_linker_delete()/wasm_store_delete()/wasm_engine_delete() to properly free all of the underlying instances of my Modules?

Valgrind seems to have a lot of complaints when I call these; below is an excerpt of the Valgrind output from calling the above 3 functions on my single Linker/Store/Engine during my application's exit:

==3218139== 488 bytes in 1 blocks are still reachable in loss record 66 of 97
==3218139==    at 0x483A809: malloc (vg_replace_malloc.c:307)
==3218139==    by 0x5048BBB: alloc::alloc::alloc (alloc.rs:80)
==3218139==    by 0x5048FF3: <alloc::alloc::Global as core::alloc::AllocRef>::alloc (alloc.rs:174)
==3218139==    by 0x5048B14: alloc::alloc::exchange_malloc (alloc.rs:268)
==3218139==    by 0x507420D: alloc::sync::Arc<T>::new (sync.rs:323)
==3218139==    by 0x5055E7A: wasmtime_jit::instantiate::CompiledModule::new (instantiate.rs:145)
==3218139==    by 0x4FB2663: wasmtime::module::Module::compile (module.rs:304)
==3218139==    by 0x4FB249F: wasmtime::module::Module::from_binary_unchecked (module.rs:275)
==3218139==    by 0x4FB2469: wasmtime::module::Module::from_binary (module.rs:244)
==3218139==    by 0x4B1BEFD: wasmtime_module_new (module.rs:56)
==3218139==    by 0x4DA9B1: [my_embedding] ([my_embedding].c:298)
==3218139==
==3218139== 640 bytes in 8 blocks are still reachable in loss record 72 of 97
==3218139==    at 0x483A809: malloc (vg_replace_malloc.c:307)
==3218139==    by 0x5EDC62F: alloc (alloc.rs:80)
==3218139==    by 0x5EDC62F: alloc (alloc.rs:174)
==3218139==    by 0x5EDC62F: exchange_malloc (alloc.rs:268)
==3218139==    by 0x5EDC62F: new<std::thread::Inner> (sync.rs:323)
==3218139==    by 0x5EDC62F: std::thread::Thread::new (mod.rs:1141)
==3218139==    by 0x598D95D: std::thread::Builder::spawn_unchecked (mod.rs:462)
==3218139==    by 0x598E2CB: std::thread::Builder::spawn (mod.rs:386)
==3218139==    by 0x596FB58: <rayon_core::registry::DefaultSpawn as rayon_core::registry::ThreadSpawn>::spawn (registry.rs:101)
==3218139==    by 0x5970862: rayon_core::registry::Registry::new (registry.rs:257)
==3218139==    by 0x596FDCD: rayon_core::registry::global_registry::{{closure}} (registry.rs:168)
==3218139==    by 0x596FF59: rayon_core::registry::set_global_registry::{{closure}} (registry.rs:194)
==3218139==    by 0x5973C23: std::sync::once::Once::call_once::{{closure}} (once.rs:264)
==3218139==    by 0x5EE6F47: std::sync::once::Once::call_inner (once.rs:416)
==3218139==    by 0x5973B50: std::sync::once::Once::call_once (once.rs:264)
==3218139==    by 0x596FE9A: rayon_core::registry::set_global_registry (registry.rs:193)
==3218139==
==3218139== 704 bytes in 1 blocks are indirectly lost in loss record 73 of 97
==3218139==    at 0x483A809: malloc (vg_replace_malloc.c:307)
==3218139==    by 0x4B3BA4B: alloc::alloc::alloc (alloc.rs:80)
==3218139==    by 0x4B3BE83: <alloc::alloc::Global as core::alloc::AllocRef>::alloc (alloc.rs:174)
==3218139==    by 0x4AE9B76: alloc::raw_vec::RawVec<T,A>::allocate_in (raw_vec.rs:152)
==3218139==    by 0x4AEDBDB: alloc::raw_vec::RawVec<T,A>::with_capacity_in (raw_vec.rs:135)
==3218139==    by 0x4AE62FD: alloc::raw_vec::RawVec<T>::with_capacity (raw_vec.rs:92)
==3218139==    by 0x4B4075D: alloc::vec::Vec<T>::with_capacity (vec.rs:358)
==3218139==    by 0x4B4DC63: <alloc::vec::Vec<T> as alloc::vec::SpecExtend<T,I>>::from_iter (vec.rs:2073)
==3218139==    by 0x4B5146B: <alloc::vec::Vec<T> as core::iter::traits::collect::FromIterator<T>>::from_iter (vec.rs:1981)
==3218139==    by 0x4AFF6EF: core::iter::traits::iterator::Iterator::collect (iterator.rs:1660)
==3218139==    by 0x4B3AEB8: wasmtime::module::wasmtime_module_new::{{closure}} (module.rs:61)
==3218139==    by 0x4B2858D: wasmtime::error::handle_result (error.rs:30)

...

Wasmtime GitHub notifications bot (Jun 23 2020 at 19:23):

alexcrichton commented on Issue #1897:

I've opened https://github.com/bytecodealliance/wasmtime/issues/1913 to track the leak issues you were encountering.

Otherwise, for API usage you'll want to *_delete(..) every handle you're given ownership of; if you do that, no memory should be leaked.

And finally, this looks good otherwise, so I'm going to go ahead and merge!

Feel free to ping me on Zulip if you've still got memory issues, though.
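
For readers of this archive, a minimal sketch (not part of the original comment) of the *_delete(..) ownership pattern described above, reusing the handle names from the snippet earlier in the thread (linker, store, engine, mod_name, func_name). The error and trap handles returned by these calls are owned by the caller and need to be deleted as well:

wasmtime_error_t  *error = NULL;
wasm_trap_t       *trap = NULL;
wasm_extern_t     *func_extern = NULL;

error = wasmtime_linker_get_one_by_name(linker, &mod_name, &func_name, &func_extern);
if (error) {
    wasmtime_error_delete(error);
    /* ... */
}

/* wasm_extern_as_func returns a view of the extern; deleting the extern
 * below is enough */
error = wasmtime_func_call(wasm_extern_as_func(func_extern), NULL, 0, NULL, 0, &trap);
if (error) {
    wasmtime_error_delete(error);
}

if (trap) {
    wasm_trap_delete(trap);
}

wasm_extern_delete(func_extern);

/* teardown at application exit, once nothing refers to them anymore */
wasmtime_linker_delete(linker);
wasm_store_delete(store);
wasm_engine_delete(engine);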

Wasmtime GitHub notifications bot (Jun 23 2020 at 19:25):

thibaultcha commented on Issue #1897:

@alexcrichton I was writing something here to clarify that I had read your comment at https://github.com/bytecodealliance/wasmtime/issues/1902#issuecomment-648330651, which answered my question above. Anyway, thank you for detailing it once again!

Thanks for the merge too.


Last updated: Oct 23 2024 at 20:03 UTC