in the spirit of starting a thread on this: I've noticed notification emails being received very late today; I'm only just now getting email notifications for stuff that happened hours ago
Continuing in that spirit, GH Actions and GH as a whole have been slow/buggy this week (and a little of last week)? My emails have also been coming through slowly, but I haven't seen Actions failures as a symptom just yet
pages are loading slowly and getting intermittent errors for me rn
https://www.githubstatus.com/ says "notifications are delayed" but it seems like it's their whole system that is being sluggish
aaand now I'm getting unicorns
git pushes are failing now too
man things fell over fast
Screenshot 2026-02-09 at 10.46.38.jpg
it's like it's christmas!
so colorful!
when I do manage to load a PR or something, it seems like actions are not getting scheduled for the PR at all
going through a VPN with a European exit node might help? https://eu.githubstatus.com/
there was a snow hour over here, too
my US team seems like it's working again?
Till Schneidereit said:
going through a VPN with a European exit node might help? https://eu.githubstatus.com/
routing through germany I'm getting extremely slow git pushes right now as well as unicorns -- my guess is this is more of a backend thing than a frontend
everything got green for a bit and it's all back to very red
We're not really keeping track per se, but at some point we're going to cross the threshold of "it would be cheaper to hire someone to maintain self-hosted CI infrastructure"
Three Mondays in a row with major outages; perhaps all BA member companies should adopt 4-day workweeks Tue-Fri ¯\_(ツ)_/¯
Till Schneidereit said:
going through a VPN with a European exit node might help? https://eu.githubstatus.com/
Wouldn't that only help if the bytecodealliance enterprise account itself were a European account?
I'm fairly sure there was an underlying resource failure of some sort; it absolutely went out here in the EU as well, but was back up in about 15 minutes or so......
FWIW, all of gh is on a stability freeze -- no new rollouts or config changes of any sort -- to fully understand and rectify what happened particularly this week.
Screenshot 2026-02-10 at 09.34.58.jpg
so it begins anew...
good god
are you running again, or is it STILL there?
I have done much GitHub myself this morning and the page is all green now so hopefully fine...
Screenshot 2026-02-11 at 10.17.07.jpg
Another day, more errors. I'm seeing a lot of delayed notifications this morning as well as a lot of spurious failures in this CI run
all I can do here is listen to the pain and pass it along to Ben
Oh that's understandable yeah, this is primarily a heads-up channel for us so we can share what we're seeing and be aware of outages/problems on our end
totes git it; I'm just letting you know that I'm backchanneling but also that I can't do more than that
that's also much appreciated too!
Maybe we can get the CEO of GraphQL on the phone
(apologies, couldn't resist)
hey, any port in a storm, right?
as it happens, the ex-CEO of GH is starting his own new GH, so maybe we can all move there while they're not charging anything yet :-)
meanwhile, the poor pm who has to deal with all this from customers:
image.png
due diligence: he IS kidding, painfully
We've talked about retries and such before, but here's an example of an exponential backoff and it just fails every time...
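For reference, the backoff pattern being discussed looks roughly like this. A minimal POSIX-sh sketch, not the actual CI script; the function name, attempt count, and delays are all illustrative, and as noted above it won't help if the platform itself keeps failing:

```shell
#!/bin/sh
# Hypothetical retry helper: runs a command, and on failure retries with
# exponentially growing delays (1s, 2s, 4s, 8s) before giving up.
retry_with_backoff() {
  max_attempts=5
  delay=1
  attempt=1
  while [ "$attempt" -le "$max_attempts" ]; do
    if "$@"; then
      return 0
    fi
    echo "attempt $attempt failed; retrying in ${delay}s" >&2
    sleep "$delay"
    delay=$((delay * 2))
    attempt=$((attempt + 1))
  done
  echo "all $max_attempts attempts failed" >&2
  return 1
}

# Illustrative usage ($RELEASE_URL is a placeholder, not a real variable here):
# retry_with_backoff curl -fsSLO "$RELEASE_URL"
```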
Could be hitting the rate limit for unauthenticated requests...
Could try using the gh CLI which can download via authenticated API calls, e.g. for the example you linked this seems to work: gh release download --repo bytecodealliance/wasm-tools wasm-tools-1.0.27 -p wasm-tools-1.0.27-x86_64-linux.tar.gz
I believe gh is preinstalled for standard actions runners but it might require a bit more config to make it authenticate as the action: https://docs.github.com/en/actions/tutorials/authenticate-with-github_token#example-1-passing-the-github_token-as-an-input
Alternatively: https://github.com/marketplace/actions/release-downloader
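If the `gh` route works, the authentication piece from the docs linked above comes down to exposing the workflow token to the CLI. A sketch of what that step's script might look like, assuming the step's `env` maps `GH_TOKEN` to `secrets.GITHUB_TOKEN` (the `--dir` choice is an assumption, not from an actual workflow here):

```shell
# gh picks up GH_TOKEN from the environment automatically, so this becomes
# an authenticated API download and avoids the unauthenticated rate limit.
gh release download --repo bytecodealliance/wasm-tools wasm-tools-1.0.27 \
  -p 'wasm-tools-1.0.27-x86_64-linux.tar.gz' \
  --dir .
```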
It looks like each attempt there downloads ~55kB then stalls -- I'd expect a rate limit to immediately return a 429 or 500 or whatever. Looks like maybe a CDN/cache problem as each download stalls at the same chunk? In any case, points more to "flaky platform" than "problem that we can solve easily" IMHO
we could also try caching tool downloads, so that at least we're presumably closer to the storage the bits come from, and they all come from the same place?
Last updated: Feb 24 2026 at 04:36 UTC