Stream: git-wasmtime

Topic: wasmtime / issue #4149 Slides for Wasmtime meeting on 202...


view this post on Zulip Wasmtime GitHub notifications bot (May 12 2022 at 18:37):

bjorn3 commented on issue #4149:

Why must current_length be atomically incremented for all modules at the same time? Having a global lock for memory growing, and then incrementing current_length individually for each module on grow, would work fine, right? Any normal module would wait to use the extra memory until the memory grow operation returned, so there is no problem in that case. If a module tried to access the memory without waiting for the memory grow operation to return, it would have to do so from a different thread, in which case the missing synchronization would already allow spurious crashes anyway.

view this post on Zulip Wasmtime GitHub notifications bot (May 12 2022 at 20:22):

abrown commented on issue #4149:

in which case missing synchronization would already allow spuriously crashing anyway

Not sure I understand exactly what you mean?

view this post on Zulip Wasmtime GitHub notifications bot (May 12 2022 at 21:17):

bjorn3 commented on issue #4149:

If you don't synchronize with the memory grow operation, accessing the new memory too early (that is, before the other thread has actually performed the grow) could cause a crash. This means any memory allocator that wants to be crash-free has to synchronize with the memory grow operation anyway, and would therefore work whether or not the current_length field is atomically updated across wasm modules.

view this post on Zulip Wasmtime GitHub notifications bot (May 12 2022 at 22:37):

abrown commented on issue #4149:

I think I see what you mean now. Yeah, in the memory allocator scenario you mention there may need to be another level of synchronization, so why even protect the current_length field? Well, discussion in the threads proposal seems to come to the conclusion that synchronization is needed (quote):

For clarity, nothing has changed about the semantics of memory.size (which guarantees no subsequent OOB up to the bound it observes, requiring the lock implementation described above or some other synchronization).

My take on this is that we cannot allow memory.grow to make more pages accessible and update the length field in such a way that memory.size could "pick up" the updated length while the pages are not yet accessible. One way to do this is with the RwLock I suggested, but we did mention using atomics (though memory.size would have to be atomic as well, right? Maybe not, if we assume it is always a lower, valid value...) or even lazily updating the length field (though I think we backed away from that).

But the other side of this is what @fitzgen suggested:

interesting case where mprotect happens before length field increments, other threads observe working accesses but read smaller memory.size; need lock to protect size too (need memory.size hostcall for correctness)

He's getting at the fact that we want memory.size to be consistent with those memory.grow operations. These two points, as well as the observation that SpiderMonkey ends up using a lock here, suggest to me that a lock is probably the safest, simplest way to go.


My initial understanding of this comes from the proposal:

Note: When IsSharedArrayBuffer (M.[[BufferObject]]) is true, the return value should be the result of an atomic read-modify-write of the new size to the internal [[ArrayBufferByteLength]] slot. The return value will be the value in pages read from the internal [[ArrayBufferByteLength]] slot before the modification to the resized size, which will be the current size of the memory.

This is TC39-related, but my reading of it is that since modifying the length field is atomic, the reads of that field must be synchronized.


Last updated: Nov 22 2024 at 16:03 UTC