Stream: git-wasmtime

Topic: wasmtime / PR #11707 ci: Cache Rust compilation and cargo...


Wasmtime GitHub notifications bot (Sep 17 2025 at 18:46):

tschneidereit opened PR #11707 from tschneidereit:actions-cache to bytecodealliance:main:

I noticed that Wasmtime uses almost no cache for its GitHub Actions workflows. Let's see how well adding a cache for target plus various cargo dirs works.
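
For context, a target-plus-cargo cache step along those lines is usually wired up with actions/cache; the paths and key in this sketch are illustrative, not necessarily what this PR adds.

    - name: Cache cargo dirs and target (illustrative)
      uses: actions/cache@v4
      with:
        path: |
          ~/.cargo/registry
          ~/.cargo/git
          target
        # the key includes the lockfile hash so dependency changes start a fresh cache
        key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
        restore-keys: |
          ${{ runner.os }}-cargo-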

Wasmtime GitHub notifications bot (Sep 17 2025 at 18:46):

tschneidereit requested wasmtime-default-reviewers for a review on PR #11707.

Wasmtime GitHub notifications bot (Sep 17 2025 at 18:46):

tschneidereit requested cfallin for a review on PR #11707.

Wasmtime GitHub notifications bot (Sep 17 2025 at 19:20):

alexcrichton commented on PR #11707:

Personally I don't think we're a good fit for caching here, so I'm not sure about this. Some concerns I would have are:

The most plausible route I know of for caching would be something like sccache-level granularity rather than target-dir granularity, but I also haven't tested whether that would help much. Our slowest builds are mostly the s390x/riscv64 emulation runs and Windows. Emulation being slow makes sense, and Windows is unfortunately just really slow.

Wasmtime GitHub notifications bot (Sep 17 2025 at 19:24):

tschneidereit updated PR #11707.

Wasmtime GitHub notifications bot (Sep 17 2025 at 19:27):

tschneidereit commented on PR #11707:

Yeah, it's possible that this will end up not being worth it: my first attempt shaved about 45 seconds off the build, and that might vary by which job wins the race to create the cache entry.

I just force-pushed a new version using https://github.com/Swatinem/rust-cache. We'll see if that does any better at all. If not, the only other thing I can think of is to specifically enable caching for the longest-running jobs and nothing else. Or we'll just close this at that point.
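
For reference, Swatinem/rust-cache is normally dropped in as a single step before the build; a minimal sketch of its default usage follows (the PR's actual configuration may differ).

    - uses: actions/checkout@v4
    - uses: Swatinem/rust-cache@v2
      # by default this caches the cargo registry/git dirs plus ./target,
      # keyed on the job, the toolchain, and the Cargo.lock contents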

Wasmtime GitHub notifications bot (Sep 17 2025 at 19:48):

tschneidereit commented on PR #11707:

I re-triggered the build with the cache seeded, but I'm already pretty certain that this won't help as-is: the job-name part of the cache keys for the test-* jobs is abbreviated in a way that makes exactly the longest-running jobs race to create the cache entry :/
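
One common way around that kind of race is to fold the full job id (or distinguishing matrix values) into the cache key so each job writes its own entry; this sketch uses rust-cache's key input and is hypothetical rather than what the PR ended up doing.

    - uses: Swatinem/rust-cache@v2
      with:
        # hypothetical: include the full job id and matrix index so the
        # parallel test-* jobs don't all compete for one abbreviated key
        key: ${{ github.job }}-${{ strategy.job-index }}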

Wasmtime GitHub notifications bot (Sep 17 2025 at 21:45):

tschneidereit updated PR #11707.

Wasmtime GitHub notifications bot (Sep 17 2025 at 22:03):

tschneidereit updated PR #11707.

Wasmtime GitHub notifications bot (Sep 17 2025 at 22:56):

tschneidereit updated PR #11707.

Wasmtime GitHub notifications bot (Sep 17 2025 at 23:01):

tschneidereit commented on PR #11707:

With a switch to only using the cache for the longest-running test job, I think this might work? It seems to reduce CI runtime by about 60-80 seconds, or roughly 10%, which doesn't seem too bad.

The last iteration also only caches dependencies now. With that, a cache entry for Linux is about 340MB, which should hopefully mean that even a full test run stays well under 10GB and doesn't risk evicting the preexisting, much more stable long-term caches.

Wasmtime GitHub notifications bot (Sep 17 2025 at 23:03):

tschneidereit edited a comment on PR #11707:

With a switch to only using the cache for the longest-running test job, I think this might work? It seems to reduce CI runtime by about 60-80 seconds, or roughly 10%, which doesn't seem too bad.

The last iteration also only caches dependencies now. With that, a cache entry for Linux is about 400MB, which should hopefully mean that even a full test run stays well under 10GB and doesn't risk evicting the preexisting, much more stable long-term caches.

[Edit: 400MB, not 340MB. I think that doesn't change the calculus though]
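
For rough scale, assuming GitHub's default 10GB per-repository Actions cache limit: at ~400MB per entry, that leaves room for roughly 25 such dependency caches before older entries start getting evicted.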

Wasmtime GitHub notifications bot (Sep 19 2025 at 15:28):

alexcrichton commented on PR #11707:

Could you run prtest:full for this too? I'm not actually sure how many wasmtime-cli builds we do, but it would be good to confirm that the total size indeed stays well under 10G. Only caching wasmtime-cli seems reasonable, since that's generally our slowest test run, with the one outlier being the C API tests on Windows.

Another possible alternative, though, is to configure sccache for the wasmtime-cli test job too. That would, I think, yield effectively the same speedups with better cache-eviction behavior, because the cache entries are much more fine-grained.
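
A sketch of that alternative, assuming the mozilla-actions/sccache-action setup step and sccache's GitHub Actions cache backend; the version pin and test command here are placeholders, not the actual wasmtime-cli job wiring.

    - uses: mozilla-actions/sccache-action@v0.0.9   # illustrative version pin
    - name: Run tests with per-crate compile caching
      run: cargo test --workspace                   # placeholder for the real test invocation
      env:
        RUSTC_WRAPPER: sccache
        SCCACHE_GHA_ENABLED: "true"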
