tschneidereit opened PR #11707 from tschneidereit:actions-cache to bytecodealliance:main:
I noticed that Wasmtime uses almost no cache for its GitHub Actions workflows. Let's see how well adding a cache for `target` plus various `cargo` dirs works.
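A minimal sketch of the kind of step being described, assuming the stock `actions/cache` action and a key derived from the OS and `Cargo.lock`; the paths, key, and action version in the actual workflow change may differ:

```yaml
# Sketch only; the PR's real configuration may differ.
- name: Cache cargo dirs and build output
  uses: actions/cache@v4
  with:
    path: |
      ~/.cargo/registry/index
      ~/.cargo/registry/cache
      ~/.cargo/git/db
      target
    key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
    restore-keys: |
      ${{ runner.os }}-cargo-
```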
tschneidereit requested wasmtime-default-reviewers for a review on PR #11707.
tschneidereit requested cfallin for a review on PR #11707.
alexcrichton commented on PR #11707:
Personally I don't think we're a good fit for caching here, so I'm not sure about this. Some concerns I would have are:
- Right now this is keyed on os + lock file, but what exactly is built/cached in a `target` dir depends on the build itself. That means that as-is there may not be much sharing between builders and whichever one wins the race to populate the cache (especially with features in play changing deep in deps). If we were to fully shard the cache based on build then I'd fear we would blow the limits quickly. We have >100 CI entries, and with a 10G limit for the whole repo that gives ~100M per cache entry, and a build of Wasmtime is much larger than that.
- We don't really need to cache Cargo registry lookups any more AFAIK, as it's such a small portion of the build itself.
The most plausible route I know of for caching would be something like `sccache`-level granularity rather than target-dir granularity, but I also haven't tested out if that would help much. Our slowest builds are mostly emulation of s390x/riscv64 and Windows. Emulation makes sense, and Windows is unfortunately just really slow.
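Relative to the cache step sketched above, the keying concern is roughly the difference between the two keys below (the sharded variant is hypothetical, and `matrix.name` is a made-up placeholder for whatever the matrix actually exposes):

```yaml
with:
  # Keyed on OS + lock file: one entry shared by all jobs, populated by
  # whichever job wins the race, so other feature combinations may reuse little.
  key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
  # Fully sharded per job (hypothetical): better hit quality per job, but with
  # >100 CI entries at hundreds of MB each, the 10G repo-wide limit fills fast.
  # key: ${{ github.job }}-${{ matrix.name }}-${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
```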
tschneidereit updated PR #11707.
tschneidereit commented on PR #11707:
Yeah, it's possible that this will end up not being worth it: my first attempt shaved about 45 seconds off the build, and that might vary by which job wins the race to create the cache entry.
I just force-pushed a new version using https://github.com/Swatinem/rust-cache. We'll see if that does any better at all. If not, the only other thing I can think of is to specifically enable caching for the longest-running jobs and nothing else. Or we'll just close this at that point.
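For reference, basic usage of that action is a single step; per its README it keys roughly on the job, the toolchain, and the lock files, and caches `~/.cargo` plus the dependency artifacts in `target`. The version pin below is illustrative:

```yaml
# Drop-in dependency caching; defaults derive the key from the job and toolchain.
- uses: Swatinem/rust-cache@v2
```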
tschneidereit commented on PR #11707:
I re-triggered the build with the cache seeded, but I'm already pretty certain that this won't help as-is: the job name part of the cache key for the `test-*` jobs is abbreviated in a way that makes exactly the longest-running jobs race to create the same cache entry :/
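One way around that collision (an editor's sketch, not necessarily what the PR ended up doing) is to feed the full matrix identity into the action's `key` input so each `test-*` job gets its own entry; `matrix.name` here is a placeholder for whatever the matrix actually defines:

```yaml
- uses: Swatinem/rust-cache@v2
  with:
    # Disambiguate the automatic job-based key with the full matrix entry so
    # the long-running test jobs stop racing for one shared cache entry.
    key: ${{ matrix.name }}
```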
tschneidereit updated PR #11707.
tschneidereit updated PR #11707.
tschneidereit updated PR #11707.
tschneidereit commented on PR #11707:
With a switch to only using the cache for the longest-running test job, I think this might work? It seems to reduce CI runtime by about 60-80 seconds, or ~10% or so, which doesn't seem too bad.
The last iteration also only caches dependencies. With that, a cache entry for Linux is about 340MB, which should hopefully mean that even a full test run stays well under 10GB, and hence shouldn't risk evicting the preexisting, much more stable long-term caches.
tschneidereit edited a comment on PR #11707:
With a switch to only using the cache for the longest-running test job, I think this might work? It seems to reduce CI runtime by about 60-80 seconds, or ~10% or so, which doesn't seem too bad.
The last iteration also only caches dependencies. With that, a cache entry for Linux is about 400MB, which should hopefully mean that even a full test run stays well under 10GB, and hence shouldn't risk evicting the preexisting, much more stable long-term caches.
[Edit: 400MB, not 340MB. I think that doesn't change the calculus, though.]
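A sketch of what "cache only for the longest-running test job" could look like, assuming a matrix variable to gate on (`matrix.name` and the job string below are made up; rust-cache keeping mostly dependency artifacts in `target` is what holds the entry around the ~400MB mentioned above):

```yaml
# Hypothetical gating; the real matrix key/value will differ.
- uses: Swatinem/rust-cache@v2
  if: matrix.name == 'Test Linux x86_64'
  with:
    cache-on-failure: true
```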
alexcrichton commented on PR #11707:
Could you run `prtest:full` for this too? I'm not actually sure how many wasmtime-cli builds we do, but it would be good to confirm that the total size is hopefully well under 10G. Only caching wasmtime-cli seems reasonable since that's mostly our slowest test run, with the one outlier being the C API tests on Windows.

Another possible alternative, though, is to configure sccache for the wasmtime-cli test job too. I think that would yield effectively the same speedups with better cache eviction behavior, because the cache entries are much more fine-grained.
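For comparison, the sccache route would look roughly like the following, assuming the `mozilla-actions/sccache-action` wrapper and sccache's GitHub Actions cache backend; the version pin and test invocation are illustrative:

```yaml
# Per-compilation caching via sccache instead of caching whole cargo/target dirs.
- uses: mozilla-actions/sccache-action@v0.0.4
- name: Run tests with sccache as the rustc wrapper
  env:
    SCCACHE_GHA_ENABLED: "true"
    RUSTC_WRAPPER: sccache
  run: cargo test --workspace
```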