Stream: git-wasmtime

Topic: wasmtime / issue #9683 Cranelift: The clif-util run comma...


Wasmtime GitHub notifications bot (Nov 26 2024 at 08:12):

abc767234318 opened issue #9683:

I constructed a clif file.

multi_func15.zip

I used the following command to run it.

clif-util run -v multi_func15.clif

But I got the following error:

Segmentation fault (core dumped)
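
For context, a segmentation fault from clif-util run usually means the natively compiled test code performed a wild memory access, rather than the tool itself crashing. One way to see the faulting machine instruction is to re-run the same command under a debugger; the clif-util path below is illustrative and assumes a local release build:

gdb --args ./target/release/clif-util run -v multi_func15.clif
(gdb) run                # runs until the SIGSEGV is raised
(gdb) x/i $pc            # show the machine instruction that faulted
(gdb) bt                 # backtrace, which may point into JIT-compiled code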

Wasmtime GitHub notifications bot (Nov 26 2024 at 08:56):

bjorn3 commented on issue #9683:

The test probably reads or writes to memory it shouldn't. In any case, please reduce your test cases rather than making others try to extract the issue from a ton of CLIF IR.

Wasmtime GitHub notifications bot (Nov 26 2024 at 09:18):

abc767234318 commented on issue #9683:

Ok, I will try to reduce the size of the test case.

Wasmtime GitHub notifications bot (Nov 26 2024 at 16:49):

cfallin commented on issue #9683:

In addition to @bjorn3's request, please post CLIF snippets here directly. A zip file is not a great way to communicate these examples as developers' time here is fairly limited. (And if the zip compression is necessary to make it practical to send, then it's too large and needs to be reduced!)

Wasmtime GitHub notifications bot (Nov 27 2024 at 07:37):

abc767234318 commented on issue #9683:

@bjorn3 @cfallin Hi, I've extracted the function that causes the segmentation fault in order to reduce this test case.

test optimize
set opt_level=speed
set enable_pinned_reg=true
set preserve_frame_pointers=true
target s390x
target aarch64
target riscv64
target x86_64


function %main(i64, i32) -> i64, f32 fast {
    ss0 = explicit_slot 8
    ss1 = explicit_slot 8
    sig0 = (i32, i64) -> f64, i32 fast

    const0 = 0x74b82aa0f192f5649620f8c17a3f6964
    const1 = 0xa2aa8b9ee040395c

block0(v2: i64, v3: i32):
    v4 = iconst.i8 92
    v5 = iconst.i16 1372
    v6 = iconst.i32 -493943460
    v7 = iconst.i64 0x78be_9a03_e28f_055c
    v8 = f16const 0x0.ef4p-14
    v9 = f32const 0x1.be8908p-3
    v10 = f64const 0x1.ceaf296167eb8p-1
    v11 = vconst.i8x16 const0
    v12 = vconst.i16x8 const0
    v13 = vconst.f16x8 const0
    v14 = vconst.i32x4 const0
    v15 = vconst.f32x4 const0
    v16 = vconst.i64x2 const0
    v17 = vconst.f64x2 const0
    v18 = vconst.i8x8 const1
    jump block2

block1:
    v31 = stack_addr.i32 ss0
    v32 = uload32 v31+4
    v33 = icmp.i16 slt v5, v5  ; v5 = 1372, v5 = 1372
    v34 = stack_addr.i32 ss0
    v35 = load.i64x2 v34+3
    v36 = uextend.i64 v27
    v37 = f64const 0x1.b67f3058b67e0p-2
    return v36, v9  ; v9 = 0x1.be8908p-3

block2:
    brif.i32 v6, block3, block6  ; v6 = -493943460

block3:
    v19 = rotr_imm.i16 v5, -17672  ; v5 = 1372
    v20 = urem.i64 v7, v7  ; v7 = 0x78be_9a03_e28f_055c, v7 = 0x78be_9a03_e28f_055c
    v21 = band.f16 v8, v8  ; v8 = 0x0.ef4p-14, v8 = 0x0.ef4p-14
    jump block5(v6, v4)  ; v6 = -493943460, v4 = 92

block5(v60: i32, v61: i8):
    v22 = stack_addr.i32 ss0
    v23 = load.i32 v22+1
    v24 = stack_addr.i32 ss1
    v25 = load.f16 v60+2
    v26 = f32const 0x1.a28e48p-2
    v27 = ishl_imm v60, -1794784619
    v28, v29 = umul_overflow.i8 v4, v4  ; v4 = 92, v4 = 92
    v30 = stack_addr.i32 ss0
    istore16.i32 v6, v60  ; v6 = -493943460
    jump block1

block6:
    v38 = stack_addr.i32 ss0
    v39 = load.i64 v38+4
    v40 = ineg.i64x2 v16  ; v16 = const0
    v41 = iconst.i32 -794804681
    v42 = popcnt.i16 v5  ; v5 = 1372
    jump block7

block7:
    v43 = udiv.i16 v5, v42  ; v5 = 1372
    v44 = srem.i64 v39, v2
    jump block9

block9:
    v45 = swizzle v11, v11  ; v11 = const0, v11 = const0
    v46 = stack_addr.i32 ss0
    v47 = load.i8 v46+5
    brif.i32 v3, block10, block11

block10:
    v48 = trunc.f32x4 v15  ; v15 = const0
    v49 = stack_addr.i32 ss0
    v50 = load.f64 v49+4
    v51 = stack_addr.i32 ss1
    v52 = atomic_load.i64 v51
    v53 = stack_addr.i32 ss1
    v54 = sload8.i16 v53+1
    v55 = stack_addr.i32 ss1
    v56 = sload8.i16 v55+1
    v57 = urem.i32 v41, v3  ; v41 = -794804681

    v63 = f64const 0x1.ceaf296167eb8p-1
    v64 = iconst.i32 100

    jump block12(v52)

block11:
    jump block12(v39)

block12(v62: i64):
    v58 = fvpromote_low v15  ; v15 = const0
    v59 = stack_addr.i32 ss0
    atomic_store.i64 v39, v59
    jump block5(v41, v47)  ; v41 = -794804681
}


; print: %main(-1, -1725923514)

Based on my analysis, the segmentation fault likely occurs in the following code block. I think it is caused by an out-of-bounds memory access in the v25 = load.f16 v60+2 instruction.

block5(v60: i32, v61: i8):
    v22 = stack_addr.i32 ss0
    v23 = load.i32 v22+1
    v24 = stack_addr.i32 ss1
    v25 = load.f16 v60+2
    v26 = f32const 0x1.a28e48p-2
    v27 = ishl_imm v60, -1794784619
    v28, v29 = umul_overflow.i8 v4, v4  ; v4 = 92, v4 = 92
    v30 = stack_addr.i32 ss0
    istore16.i32 v6, v60  ; v6 = -493943460
    jump block1

The faulting instruction shown below appears to be the machine instruction generated for v25 = load.f16 v60+2.
![image](https://github.com/user-attachments/assets/892dcb85-2c6a-4900-b743-e1df193d0413)
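
For what it's worth, in block5 the address operand v60 always carries one of the plain integer constants passed in as a block argument (v6 = -493943460 from block3, or v41 = -794804681 from block12), not a pointer into a stack slot or any other mapped allocation. Dereferencing such an address is expected to fault on any target, so this looks more like a test case that reads memory it shouldn't than a Cranelift bug, though that is an interpretation rather than a verified conclusion. A minimal, untested sketch of the same pattern in isolation, written against the syntax used above (i16 instead of f16 only to keep the return type simple), might look like this:

test run
target x86_64

function %wild_load(i64) -> i16 fast {
block0(v0: i64):
    ; v0 is an arbitrary integer, not the address of memory the test
    ; owns, so this load is expected to segfault at run time.
    v1 = load.i16 v0+2
    return v1
}
; print: %wild_load(-493943460)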

Wasmtime GitHub notifications bot (Nov 27 2024 at 07:57):

cfallin commented on issue #9683:

Hi @abc767234318 -- to be blunt, bug reports with this much un-minimized detail and this little analysis are not very useful to us. You are likely generating this from fuzzing or some other random testing strategy (yes?) and submitting any crashes you find. That takes very little effort. The hard part is determining whether the issue is "real" -- that is, due to a bug in Cranelift, rather than your testing infrastructure or your assumptions.

When we get a flood of reports like this (7 from you in the past 6 days, by my count) it is basically performing a human denial-of-service attack on our project. We are interested in bugs, of course, but the social contract of open-source is that you need to participate fairly and do your share of the work.

Could you please, for this and any future bug reports you submit, add analysis that indicates why you believe the program should not behave the way it does? Here, for example, could you indicate why you believe the load should not be out of bounds, what the value of the address operand is, whether the code runs successfully on other ISAs or the CLIF interpreter if applicable, and any other questions you can think of to help reduce the issue?
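
For CLIF inputs in particular, one way to make the cross-ISA and interpreter comparison is the filetest harness: a single file can declare both test interpret and test run, so the interpreter's result and the natively compiled result are checked against the same run directives. This assumes the reproducer can be expressed as run/print invocations; the function below is a trivial placeholder, not the reproducer itself:

test interpret
test run
target x86_64

function %add(i32, i32) -> i32 fast {
block0(v0: i32, v1: i32):
    v2 = iadd v0, v1
    return v2
}
; run: %add(1, 2) == 3

Such a file is typically exercised with clif-util test.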

Wasmtime GitHub notifications bot (Nov 27 2024 at 08:37):

abc767234318 commented on issue #9683:

I apologize for not providing sufficient detail and analysis in my reports. I appreciate the importance of thorough investigation in identifying real issues, especially in an open-source project. I want to clarify that my reports are generated from fuzzing and random testing strategies, as you suspected, and I understand that this approach may not always yield the most actionable insights. My current understanding of the Cranelift compiler and its internals is limited, which has contributed to the lack of detailed analysis in my reports.

I will strive to include more context and reasoning in my reports. I’ll make sure to analyze the behavior of the program more thoroughly, including addressing the specific points you mentioned.

Thank you for your patience, and I appreciate your guidance as I work to improve my contributions.

Wasmtime GitHub notifications bot (Dec 06 2024 at 19:51):

fitzgen commented on issue #9683:

Not directly applicable to clif test cases, but you may find https://docs.wasmtime.dev/contributing-reducing-test-cases.html enlightening.
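
For .clif files specifically, clif-util also ships a bugpoint subcommand intended to automatically shrink a CLIF input that makes the compiler itself crash; whether it can reduce a case like this one, where the crash only happens when the compiled code runs, is less clear, so the exact behaviour and accepted arguments are best checked locally:

clif-util bugpoint --help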

