Stream: git-wasmtime

Topic: wasmtime / issue #7215 cranelift: Allow allocating heaps ...


Wasmtime GitHub notifications bot (Oct 11 2023 at 16:14):

afonso360 added the cranelift label to Issue #7215.

Wasmtime GitHub notifications bot (Oct 11 2023 at 16:14):

afonso360 opened issue #7215:

:wave: Hey,

This was something that @jameysharp brought up during today's cranelift meeting.

Feature

We could add support to our runtests to allocate a heap memory region and pass it to the function under test.

Benefit

This allows us to do a few things:

Implementation

We used to have something similar to this in https://github.com/bytecodealliance/wasmtime/pull/3302, which was later removed in https://github.com/bytecodealliance/wasmtime/pull/5386.

We had an annotation like ; heap: static, size=0x1000, ptr=vmctx+0, bound=vmctx+8, which would pass the address of the heap to the function under test via the vmctx parameter.
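To make that concrete, a test under the old scheme had roughly the shape sketched below. This is only an illustration, not the exact syntax of the removed feature: the function name and body are made up, and all the body does is read the heap base pointer from vmctx+0 and load a byte through it.

function %old_style(i64 vmctx, i32) -> i8 {
    gv0 = vmctx
    gv1 = load.i64 notrap aligned gv0+0  ; heap base pointer stored at vmctx+0
block0(v0: i64, v1: i32):
    v2 = global_value.i64 gv1            ; materialize the heap base
    v3 = uextend.i64 v1
    v4 = iadd v2, v3                     ; address of heap[v1]
    v5 = load.i8 v4
    return v5
}
; heap: static, size=0x1000, ptr=vmctx+0, bound=vmctx+8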

That used to involve some special handling for vmctx parameters that I think is incompatible with what we do today (don't quote me on this).

But we could do something similar, for example:

function %test(i64, i8) -> i64 {
...
}
; heap: <heap0> size=16, align=8
; run: %test(<heap0>, 10) == 10

The <heap0> annotation would then be resolved to the address of the allocated heap when calling the test.
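To give a fuller (hypothetical) picture, a complete runtest using this proposal could look like the sketch below: the function body just stores the i8 argument into the test-allocated heap and loads it back. The heap/run annotation syntax is the one proposed above, not something the parser accepts today, and the function body is made up for illustration.

test run
target x86_64
target aarch64

function %test(i64, i8) -> i64 {
block0(v0: i64, v1: i8):
    store v1, v0          ; write the i8 argument into the heap
    v2 = load.i8 v0       ; read it back
    v3 = uextend.i64 v2
    return v3
}
; heap: <heap0> size=16, align=8
; run: %test(<heap0>, 10) == 10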

Alternatives

An alternative for testing load fusion for vectors is to add stack alignment flags to our stack slots. But testing the alias analysis bits is slightly harder that way, although there are probably workarounds there too.
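For comparison, that alternative would presumably look something like the sketch below. The align flag on the slot declaration is hypothetical (it is exactly the missing piece this alternative would require); the rest uses existing stack slot syntax.

function %aligned_slot() -> i64 {
    ss0 = explicit_slot 16, align = 8   ; hypothetical alignment flag
block0:
    v0 = stack_addr.i64 ss0
    v1 = iconst.i64 10
    store v1, v0
    v2 = load.i64 v0
    return v2
}
; run: %aligned_slot() == 10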

Wasmtime GitHub notifications bot (Oct 12 2023 at 16:21):

jameysharp commented on issue #7215:

Your observations in #7225 reminded me that, in both fuzz tests and runtests, we can theoretically pass pointers from one function to another.

So one approach in runtests would be to get a stackslot pointer in one function and pass it to a callee, which can then treat the pointer as heap or vmctx or whatever. In fuzzgen, we could add a fake "address" type to the set of function parameter types we might generate; callers would fill such arguments in using the same selection logic as addresses for loads and stores, and callees would add those arguments to the set of addresses which loads and stores can choose from.

Since we don't do any inter-procedural optimization, that ensures that code generation can't do any hypothetical optimizations that would rely on knowing the pointer refers to the stack.

We still need the ability to specify alignment on stack slots, though.
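A minimal sketch of the runtest side of this idea, assuming the test harness resolves calls between functions defined in the same file (the function names and constants are illustrative): one function takes the address of its stack slot and passes it to a callee, which only ever sees an opaque pointer it can load from and store to.

function %callee(i64) -> i32 {
block0(v0: i64):
    v1 = load.i32 v0        ; the callee only sees an opaque pointer
    return v1
}

function %caller() -> i32 {
    ss0 = explicit_slot 16
    fn0 = %callee(i64) -> i32
block0:
    v0 = stack_addr.i64 ss0
    v1 = iconst.i32 42
    store v1, v0            ; initialize the slot through its address
    v2 = call fn0(v0)       ; pass the stackslot pointer across the call
    return v2
}
; run: %caller() == 42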

Wasmtime GitHub notifications bot (Oct 12 2023 at 19:04):

afonso360 commented on issue #7215:

That's a great idea. We might not even have to do anything too hacky: since we don't yet use reftypes for anything, we could use those to signal that this is a valid address for loads and stores.

