I'm about to enable the merge queue, which means that no one will be able to merge anything until the first PR merges. After that first PR merges, everyone will have to rebase on it to ensure their PR is running the right CI before it can get added to the queue.
I'll post here once I actually update the branch protections.
I plan to do it within the next half hour though
hm ok, actually gonna update the settings now, so nothing will be able to merge for a bit while https://github.com/bytecodealliance/wasmtime/pull/5766 is testing
I'm realizing now I forgot to write up documentation, so I'll do that once this merges as well
ok queue is turned on and 5766 is in the queue
Is there a way to actually watch the CI run for the merge queue? It doesn't seem like the "Merge Queue" view provides it
ah, nevermind, you can get to it from the toplevel actions list
once you make it to the merge queue page which is itself hard to find:
but on that page you can click the little icon that looks like a paper with a checkmark to...
ok, I forgot that it doesn't actually show it there any more
in any case you can click on the "Actions" tab and this is the build for the merge
none of this is surfaced in the PR UI AFAIK
ok I just added "Rustfmt" as a required check
so if you click the paper-with-checkmark icon, the "Rustfmt" job should always be there if it's been queued up, and you can click "Details" to go to the full build
(I don't think that the UI is great here)
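(For context, that required check corresponds to a job name in the CI workflow; roughly something like the following, written from memory rather than copied from the repo's actual config:)

```yaml
# Roughly what the "Rustfmt" job looks like in CI; details here are from
# memory and may not match the repo's workflow file exactly.
jobs:
  rustfmt:
    name: Rustfmt
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: rustup component add rustfmt
      - run: cargo fmt --all -- --check
```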
CI has passed now, waiting for the queue to pick it up
and it's merged!
I've rebased and queued https://github.com/bytecodealliance/wasmtime/pull/5797 now
CI is failing due to rate limits. I need to go to the vet right now, so I'll be back in a bit. I think this is the issue labeler job eating all our capacity; I'll try disabling that when I get back
ah, I didn't realize the labeler took away CI capacity. Slightly spicy hot take: if we're CI-bound and talking about finding funding for additional resources etc, maybe we disable it permanently?
(I wonder if there are other ways to get labels, external bots or somesuch; alternately it's not the worst thing in the world to apply labels when creating a PR)
I am supposed to be engaged in finding you all more capacity in some form. I have failed in the current "who's leaving their job" environment. I'll see what I can do here.
I'm not certain the labeler is the issue; it's just a hunch at this point. If it's the problem I'll give it its own dedicated token, which should have a separate rate limit
Ok, it's looking like fewer failures, so I'll look into a separate token for the labeler job
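(Concretely that would mean passing a dedicated PAT from the repository secrets to the labeler step instead of the default `GITHUB_TOKEN`. A rough sketch, where the action path, the `token` input name, and the `LABELER_TOKEN` secret are all placeholders rather than the actual triage workflow config:)

```yaml
# Sketch only: run the labeler with its own PAT so its API calls count
# against a separate rate limit instead of the shared GITHUB_TOKEN.
# The action path, `token` input, and LABELER_TOKEN secret are placeholders.
jobs:
  label:
    runs-on: ubuntu-latest
    steps:
      - uses: ./.github/actions/issue-labeler
        with:
          token: ${{ secrets.LABELER_TOKEN }}
```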
We can also turn down the frequency of the issue labeler to every hour or something, fwiw. The jobs last like a couple minutes last time I checked though, so it would be surprising if they were using up any significant amount of our capacity...
oh, it's about API rate limits? in that case, in addition to scheduling it less often, we can make it process fewer issues per run
The labeler scrapes all open PRs and issues, right?
not all, there is a configurable number, 40 by default iirc
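(purely illustrative: if that count is exposed as an action input, turning it down would just be an extra `with:` entry on the labeler step; `max-issues-per-run` is a made-up input name, and the real knob may be called something else)

```yaml
# Hypothetical: cap how many open issues/PRs the labeler scans per run.
# Both the action path and the `max-issues-per-run` input name are invented
# for illustration.
      - uses: ./.github/actions/issue-labeler
        with:
          max-issues-per-run: 20  # down from the ~40 default mentioned above
```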
I'm not sure of the interval of the rate limit, or whether it's blowing the whole limit in one run or over repeated runs in the interval
Wow if it's just 40 we shouldn't be anywhere near rate limits
We've had a lot of transient issues with rate limits historically that I've never bottomed out
https://github.com/bytecodealliance/label-messager-action/blob/main/action.yml#L10-L12
https://github.com/bytecodealliance/subscribe-to-label-action/blob/main/action.yml#L10-L12
Hm ok the rate issue is probably totally unrelated then
I'll reenable the labeler later
Actually if that's not on the v1 tag we aren't using that
And our triage workflow has other actions too
So I'd need to dig into all of them
so this job prints out headers which tell us that:
so at once every 5 minutes with 200+ calls per run (upwards of 2,400 API calls an hour), I think that could account for these issues
I'll send a PR to run once an hour
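(i.e. just swapping the workflow's schedule trigger, assuming the triage workflow is cron-driven:)

```yaml
# Run the triage/labeler workflow once an hour instead of every few minutes.
on:
  schedule:
    - cron: '0 * * * *'  # at the top of every hour
```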
fwiw GitHub limited how frequently cron actions can run, so it was effectively capped at every 15 minutes
all that stuff should be on the v1 branch too, fwiw