matthargett opened issue #13255:
Feature
Support arm64_32 platforms (Apple Watch 4-8) in interpreter mode.
Benefit
Right now there are a few relatively trivial gaps that prevent wasmtime from running in interpreter mode on arm64_32 platforms. The ones I'm focused on are the Apple Watch 4, 5, 6, 7, 8, and SE 2, which together represent ~100M devices worldwide. I have WasmEdge running on these devices, but its real-world performance simply doesn't hold a candle to wasmtime/cranelift.
Implementation
Apple Watch 4, 5, 6, 7, 8, and SE 2 ship an AArch64 ISA with an ILP32 ABI: 64-bit registers, 32-bit pointers. Apple/LLVM treat the arch token as arm64_32 (not as a gnu_ilp32-style environment qualifier the way Linux does). There are just a few touchups, which I've staged in forks and benchmarked end-to-end on Apple Watch SE 2 (watchOS 11.6.2) and Apple Watch 10 (watchOS 26.4):
- Adds a third Arm64_32 variant to Aarch64Architecture so the arm64_32-apple-watchos Rust target triple round-trips through Triple::from_str / Display (see the sketch after this list): https://github.com/rebeckerspecialties/target-lexicon/pull/1
- Fixes the unwinder's default register formatter, which is ambiguous due to a 64-bit assumption: https://github.com/rebeckerspecialties/wasmtime/pull/1
- It looks like mach2 already has the fix I need in its already-released 0.6 version: https://github.com/JohnTitor/mach2/pull/50

Again, I've already integration-tested this locally on real devices and in simulators. While the results are better than WasmCore, wasmtime doesn't always beat WAMR on iPhone XS / Apple Watch SE 2, so follow-on PRs would be some optimizations (mostly already discussed for different good reasons).
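For illustration only, here's a minimal standalone sketch of the shape of that target-lexicon change: an extra variant whose arch token parses and prints as arm64_32 but reports a 32-bit pointer width. The enum and method names below are simplified stand-ins, not the crate's actual definitions or the contents of the PR above.

```rust
// Illustrative sketch only: a simplified stand-in for target-lexicon's
// Aarch64Architecture, showing a third variant and the parse/Display
// round-trip for Apple's arm64_32 arch token.
use std::fmt;
use std::str::FromStr;

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum Aarch64Architecture {
    Aarch64,
    Aarch64be,
    // Proposed addition: AArch64 ISA with an ILP32 ABI (32-bit pointers).
    Arm64_32,
}

impl Aarch64Architecture {
    // The ABI detail that matters downstream: registers are 64-bit,
    // but pointers are 32-bit on arm64_32.
    fn pointer_width_bits(self) -> u8 {
        match self {
            Self::Aarch64 | Self::Aarch64be => 64,
            Self::Arm64_32 => 32,
        }
    }
}

impl FromStr for Aarch64Architecture {
    type Err = ();
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "aarch64" => Ok(Self::Aarch64),
            "aarch64_be" => Ok(Self::Aarch64be),
            // Apple's arch token, not a gnu_ilp32-style environment qualifier.
            "arm64_32" => Ok(Self::Arm64_32),
            _ => Err(()),
        }
    }
}

impl fmt::Display for Aarch64Architecture {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str(match self {
            Self::Aarch64 => "aarch64",
            Self::Aarch64be => "aarch64_be",
            Self::Arm64_32 => "arm64_32",
        })
    }
}

fn main() {
    // Round-trip: parse the arch token, then print it back losslessly.
    let arch: Aarch64Architecture = "arm64_32".parse().unwrap();
    assert_eq!(arch.to_string(), "arm64_32");
    assert_eq!(arch.pointer_width_bits(), 32);
}
```

The key property is the round-trip: parsing "arm64_32" and printing it back must be lossless, and the variant must report a 32-bit pointer width despite being an AArch64 ISA.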
FWIW, I've done similar Apple Watch-enabling work in wgpu and a few other projects recently, and those changes have been merged.
Alternatives
Make me keep wasmtime support for these 100M+ devices in a separate fork.
alexcrichton added the wasmtime:platform-support label to Issue #13255.
alexcrichton commented on issue #13255:
This all seems quite reasonable to land to me, thanks for raising the issue here! You may have already found it but we have checks for a number of targets in CI and I think it'd be reasonable to add more targets like this there once the work is all complete to help weed out accidental future regressions.
Do you need guidance/help on any particular issue? Or are you mostly gauging interest in landing PRs for minor portability issues?
matthargett commented on issue #13255:
I think it'd be reasonable to add more targets like this there once the work is all complete to help weed out accidental future regressions.
Agreed. The good news is that wasmtime runs perfectly inside a watchOS simulator, unlike the RustNN work I contributed, so even a smoke test should be cheap to run in CI.
Do you need guidance/help on any particular issue? Or are you mostly gauging interest in landing PRs for minor portability issues?
No, I think I've got it. I just wanted to file a meta issue so the overarching goal was obvious. Based on my benchmarking and profiling of my app workload so far, I think the first optimization will be around call_indirect. I see there have been a lot of discussions around that in wasmtime over the last few years, and it would probably help avoid the weakness of the branch predictor in the efficiency cores on the A12/S8 (iPhone XS / Apple Watch SE 2).
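For context on why call_indirect tends to show up in profiles: each such call goes through a runtime table lookup and a signature check before an indirect branch, which is exactly the data-dependent branch pattern a weak indirect-branch predictor struggles with. Below is a minimal, engine-agnostic Rust sketch of that shape; it is not how wasmtime or its interpreter actually implements the instruction, and all names here are made up.

```rust
// Engine-agnostic sketch of the call_indirect dispatch pattern:
// look up the callee in a table, check its signature, then make an
// indirect call. Every call site becomes a data-dependent indirect branch.
type WasmFn = fn(i32) -> i32;

struct TableEntry {
    sig_id: u32,  // expected signature id, checked before the call
    func: WasmFn, // the actual callee
}

fn call_indirect(table: &[TableEntry], index: usize, expected_sig: u32, arg: i32) -> i32 {
    let entry = &table[index];              // bounds check (panics here; traps in a real engine)
    assert_eq!(entry.sig_id, expected_sig); // signature check (also a trap in a real engine)
    (entry.func)(arg)                       // indirect branch the predictor must guess
}

fn add_one(x: i32) -> i32 { x + 1 }
fn double(x: i32) -> i32 { x * 2 }

fn main() {
    let table = [
        TableEntry { sig_id: 0, func: add_one },
        TableEntry { sig_id: 0, func: double },
    ];
    assert_eq!(call_indirect(&table, 1, 0, 21), 42);
}
```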
Happy to join discord/matrix if more discussion up front is helpful. Now that I'm over the feasibility hump, I'll probably be integrating and benchmarking in more rapid succession.
cfallin commented on issue #13255:
Wanted to +1 the "thanks for porting efforts" sentiment -- this is really valuable for us! -- and also comment on this:
Happy to join discord/matrix if more discussion up front is helpful. Now that I'm over the feasibility hump, I'll probably be integrating and benchmarking in more rapid succession.
We hang out on Zulip here if you want to have more detailed conversations during integration. (No need to drop in if you just send us working PRs -- great! -- but feel free to ask questions there.)