pepyakin opened issue #4170:
Test Case

pinned_reg.clif:

```clif
function u0:0(i64 vmctx, i64) wasmtime_system_v {
    gv0 = vmctx
    gv1 = load.i64 notrap aligned readonly gv0
    gv2 = load.i64 notrap aligned gv1
    gv3 = vmctx
    stack_limit = gv2

block0(v0: i64, v1: i64):
    v2 = global_value.i64 gv3
    v3 = load.i64 notrap aligned v2
    v4 = get_pinned_reg.i64
    v5 = iadd_imm v4, 1
    set_pinned_reg v5
    jump block1

block1:
    return
}
```
Steps to Reproduce

```
$ clif-util compile -D --target x86_64 pinned_reg.clif --set enable_pinned_reg
.byte 85, 72, 137, 229, 72, 131, 236, 16, 76, 137, 60, 36, 76, 139, 15, 73, 131, 199, 1, 76, 139, 60, 36, 72, 131, 196, 16, 72, 137, 236, 93, 195
Disassembly of 32 bytes:
   0:   55                      push    rbp
   1:   48 89 e5                mov     rbp, rsp
   4:   48 83 ec 10             sub     rsp, 0x10
   8:   4c 89 3c 24             mov     qword ptr [rsp], r15
   c:   4c 8b 0f                mov     r9, qword ptr [rdi]
   f:   49 83 c7 01             add     r15, 1
  13:   4c 8b 3c 24             mov     r15, qword ptr [rsp]
  17:   48 83 c4 10             add     rsp, 0x10
  1b:   48 89 ec                mov     rsp, rbp
  1e:   5d                      pop     rbp
  1f:   c3                      ret
```
Expected Results

`get_pinned_reg` and `set_pinned_reg` either work in this situation, or at least the verifier rejects the code.

Actual Results
r15, the pinned register, gets saved and restored as an ordinary callee-saved register (CSR), which makes it impossible to use as a pinned register.
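Concretely, in the disassembly above the prologue spills r15 and the epilogue reloads it, so the value written by `set_pinned_reg` never survives the return:

```
   8:   4c 89 3c 24             mov     qword ptr [rsp], r15   ; prologue saves r15
   f:   49 83 c7 01             add     r15, 1                 ; v5 = iadd_imm v4, 1; set_pinned_reg v5
  13:   4c 8b 3c 24             mov     r15, qword ptr [rsp]   ; epilogue restores r15, undoing the add
```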
Versions and Environment
Cranelift version or commit: f19d8cc85
Extra Info
Found this while hacking on https://github.com/bytecodealliance/wasmtime/issues/4109#issuecomment-1130527740. I've worked around it by adding this check in `gen_clobber_restore`:

```rust
if call_conv == isa::CallConv::WasmtimeSystemV
    && flags.enable_pinned_reg()
    && reg.to_reg() == regs::r15().to_real_reg().unwrap()
{
    // HACK: don't restore r15 if pinned_reg enabled.
    continue;
}
```
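A more general fix would presumably exclude the pinned register on the save side as well, so the prologue never spills it in the first place. A minimal sketch of that idea, assuming a shared predicate consulted by both `gen_clobber_save` and `gen_clobber_restore` (the names `is_reg_saved_in_prologue` and `is_callee_saved` are illustrative here, not confirmed Cranelift API):

```rust
// Sketch only: a central predicate deciding whether a clobbered register
// gets a prologue-save/epilogue-restore slot. Filtering the pinned
// register here keeps gen_clobber_save and gen_clobber_restore in sync.
fn is_reg_saved_in_prologue(
    call_conv: isa::CallConv,
    enable_pinned_reg: bool,
    r: RealReg,
) -> bool {
    // r15 is the pinned register on x86-64; when the pinned-register
    // feature is enabled it is not an ordinary callee-save and must not
    // be spilled or reloaded by the prologue/epilogue.
    if enable_pinned_reg && r == regs::r15().to_real_reg().unwrap() {
        return false;
    }
    // Fall through to the usual callee-saved rules for this calling
    // convention (hypothetical helper).
    is_callee_saved(call_conv, r)
}
```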
I am pretty sure that this is also the case for the aarch64 backend.
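If so, a corresponding guard in the aarch64 `gen_clobber_restore` might look like the following; this is a hedged sketch that assumes x21 is the pinned register on that backend and that a `regs::pinned_reg()` accessor is available (identifiers illustrative):

```rust
// Hypothetical aarch64 analogue of the x64 workaround above.
if flags.enable_pinned_reg()
    && reg.to_reg() == regs::pinned_reg().to_real_reg().unwrap()
{
    // Don't reload the pinned register (x21) in the epilogue.
    continue;
}
```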
cc @cfallin
pepyakin labeled issue #4170.
pepyakin edited issue #4170.
cfallin commented on issue #4170:
I'll take a look at this, thanks! I think this is probably a result of only ever having used pinned registers with the SpiderMonkey ("baldrdash") calling convention support, so the SysV code doesn't account for it.
cfallin closed issue #4170.
Last updated: Dec 23 2024 at 12:05 UTC