Hi all,
On 0.19.0, valgrind keeps complaining about a possible leak:
==642999== 720 bytes in 1 blocks are still reachable in loss record 7 of 14
==642999== at 0x483A809: malloc (vg_replace_malloc.c:307)
==642999== by 0x4AFB4F1: alloc::collections::btree::map::BTreeMap<K,V>::insert (in /usr/local/opt/lib/libwasmtime.so)
==642999== by 0x4AF62E4: wasmtime::frame_info::register (in /usr/local/opt/lib/libwasmtime.so)
==642999== by 0x4ADFC4C: wasmtime::instance::Instance::new (in /usr/local/opt/lib/libwasmtime.so)
==642999== by 0x498A2F1: wasmtime_instance_new (in /usr/local/opt/lib/libwasmtime.so)
There seems to have been recent work in that area (https://github.com/bytecodealliance/wasmtime/commit/f30ce1fe9717aaef65d16b5b30498f8cabeb632b) - could Valgrind's concern be legitimate here?
I also see this one:
==706723== 8,192 bytes in 8 blocks are still reachable in loss record 32 of 34
==706723== at 0x483A809: malloc (vg_replace_malloc.c:307)
==706723== by 0x58A087B: alloc::alloc::alloc (alloc.rs:80)
==706723== by 0x58A1063: <alloc::alloc::Global as core::alloc::AllocRef>::alloc (alloc.rs:174)
==706723== by 0x5896D2D: alloc::raw_vec::RawVec<T,A>::allocate_in (raw_vec.rs:183)
==706723== by 0x58981A0: alloc::raw_vec::RawVec<T,A>::with_capacity_in (raw_vec.rs:159)
==706723== by 0x589695E: alloc::raw_vec::RawVec<T>::with_capacity (raw_vec.rs:90)
==706723== by 0x589914E: alloc::vec::Vec<T>::with_capacity (vec.rs:363)
==706723== by 0x589BBA0: crossbeam_deque::Buffer<T>::alloc (lib.rs:139)
==706723== by 0x589D079: crossbeam_deque::Worker<T>::new_lifo (lib.rs:342)
==706723== by 0x588041E: rayon_core::registry::Registry::new::{{closure}} (registry.rs:228)
==706723== by 0x58851EC: core::iter::adapters::map_fold::{{closure}} (mod.rs:833)
==706723== by 0x588A0FD: core::iter::traits::iterator::Iterator::fold (iterator.rs:2022)
==706723== by 0x58855C3: <core::iter::adapters::Map<I,F> as core::iter::traits::iterator::Iterator>::fold (mod.rs:873)
==706723== by 0x5884AAA: core::iter::traits::iterator::Iterator::unzip (iterator.rs:2710)
==706723== by 0x587F8FC: rayon_core::registry::Registry::new (registry.rs:223)
==706723== by 0x587F59D: rayon_core::registry::global_registry::{{closure}} (registry.rs:168)
==706723== by 0x587F709: rayon_core::registry::set_global_registry::{{closure}} (registry.rs:194)
==706723== by 0x5885B93: std::sync::once::Once::call_once::{{closure}} (once.rs:264)
==706723== by 0x5DBD689: std::sync::once::Once::call_inner (once.rs:416)
==706723== by 0x5885A40: std::sync::once::Once::call_once (once.rs:264)
==706723== by 0x587F666: rayon_core::registry::set_global_registry (registry.rs:193)
==706723== by 0x587F52E: rayon_core::registry::global_registry (registry.rs:168)
==706723== by 0x588053C: rayon_core::registry::Registry::current_num_threads (registry.rs:286)
==706723== by 0x58961E5: rayon_core::current_num_threads (lib.rs:84)
==706723== by 0x5574DA9: rayon::iter::plumbing::Splitter::new (mod.rs:267)
==706723== by 0x5574C76: rayon::iter::plumbing::LengthSplitter::new (mod.rs:315)
==706723== by 0x555D82F: rayon::iter::plumbing::bridge_producer_consumer (mod.rs:396)
==706723== by 0x555D5D4: <rayon::iter::plumbing::bridge::Callback<C> as rayon::iter::plumbing::ProducerCallback<I>>::callback (mod.rs:373)
==706723== by 0x553B5FF: <rayon::slice::Iter<T> as rayon::iter::IndexedParallelIterator>::with_producer (mod.rs:543)
==706723== by 0x555F05A: rayon::iter::plumbing::bridge (mod.rs:357)
==706723== by 0x553B4E4: <rayon::slice::Iter<T> as rayon::iter::ParallelIterator>::drive_unindexed (mod.rs:519)
==706723== by 0x54AB444: <rayon::iter::map_with::MapInit<I,INIT,F> as rayon::iter::ParallelIterator>::drive_unindexed (map_with.rs:382)
==706723== by 0x5550879: <rayon::iter::map::Map<I,F> as rayon::iter::ParallelIterator>::drive_unindexed (map.rs:49)
==706723== by 0x556D150: <rayon::iter::while_some::WhileSome<I> as rayon::iter::ParallelIterator>::drive_unindexed (while_some.rs:44)
==706723== by 0x557E907: <rayon::iter::fold::Fold<I,ID,F> as rayon::iter::ParallelIterator>::drive_unindexed (fold.rs:59)
==706723== by 0x5550638: <rayon::iter::map::Map<I,F> as rayon::iter::ParallelIterator>::drive_unindexed (map.rs:49)
==706723== by 0x556D62E: rayon::iter::reduce::reduce (reduce.rs:15)
==706723== by 0x55503BA: rayon::iter::ParallelIterator::reduce (mod.rs:900)
==706723== by 0x556A79E: rayon::iter::extend::collect (extend.rs:29)
==706723== by 0x54BE3EB: rayon::iter::collect::<impl rayon::iter::ParallelExtend<T> for alloc::vec::Vec<T>>::par_extend (mod.rs:163)
==706723== by 0x554985D: rayon::iter::from_par_iter::collect_extended (from_par_iter.rs:17)
==706723== by 0x54BE2DA: rayon::iter::from_par_iter::<impl rayon::iter::FromParallelIterator<T> for alloc::vec::Vec<T>>::from_par_iter (from_par_iter.rs:30)
==706723== by 0x556CFDA: rayon::iter::ParallelIterator::collect (mod.rs:1887)
==706723== by 0x55202BB: rayon::result::<impl rayon::iter::FromParallelIterator<core::result::Result<T,E>> for core::result::Result<C,E>>::from_par_iter (result.rs:121)
==706723== by 0x54AC388: rayon::iter::ParallelIterator::collect (mod.rs:1887)
==706723== by 0x53EF758: wasmtime_environ::cranelift::compile (cranelift.rs:426)
==706723== by 0x54532DE: wasmtime_environ::cache::ModuleCacheEntry::get_data (cache.rs:84)
==706723== by 0x53EF24A: <wasmtime_environ::cranelift::Cranelift as wasmtime_environ::compilation::Compiler>::compile_module (cranelift.rs:296)
==706723== by 0x507727F: wasmtime_jit::compiler::Compiler::compile (compiler.rs:147)
==706723== by 0x5068729: wasmtime_jit::instantiate::CompilationArtifacts::new (instantiate.rs:85)
==706723== by 0x50694E8: wasmtime_jit::instantiate::CompiledModule::new (instantiate.rs:148)
==706723== by 0x4DEB673: wasmtime::module::Module::compile (module.rs:303)
==706723== by 0x4DEB48F: wasmtime::module::Module::from_binary_unchecked (module.rs:274)
==706723== by 0x4DEB41E: wasmtime::module::Module::from_binary (module.rs:243)
==706723== by 0x4AFF8D0: wasmtime_module_new (module.rs:51)
Yet wasm_module_delete() is properly called on this module.
cc @fitzgen (he/him)
I can look more into this on Monday, but FWIW valgrind has a lot of false positives around Rust `lazy_static!`s, like the frame info that I see in that first backtrace
@Thibault Charbonnier do you have minimal steps to reproduce?
FWIW those look like "normal" leaks to me: wasmtime has some global data structures which can hold on to allocated memory that isn't deallocated when the program exits
that looks like an internal BTreeMap
and crossbeam-related data structures
so I don't think those are leaks in your application itself
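If you want Valgrind to stay quiet about these, a suppression along the following lines should match the first backtrace (the suppression name is arbitrary, and the `fun:` pattern is a guess based on the demangled frame shown above):

```
{
   wasmtime-frame-info-registry
   Memcheck:Leak
   match-leak-kinds: reachable
   fun:malloc
   ...
   fun:*frame_info*register*
}
```

Alternatively, note that Valgrind only prints details for "still reachable" blocks under `--show-leak-kinds=all` (or the older `--show-reachable=yes`); dropping that flag hides them while still reporting "definitely lost" leaks.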
Last updated: Nov 22 2024 at 17:03 UTC