I'm frustrated with GitHub's stability, too, but we should be clear that GitHub wasn't down. They're one of the more honest services when it comes to posting service degradations, unlike some other platforms where teams will resist updating the status page until it's an undeniable site-wide outage.
> We are working with our compute provider to alleviate elevated queue times and failures for Actions Jobs running on Hosted Runners in the East US region affecting 10% of runs. Hosted Runners with private networking can fail over to a different Region to mitigate the issue.
> We are investigating elevated queue times on Actions Jobs running on Standard Hosted Runners in East US affecting 10% of runs
Frustrating if you were impacted, but from the comments here you'd think the entire site was down again.
They're being forced to be less dishonest. They're not going to count this as downtime anyway. They're reporting 100% uptime for April: https://www.githubstatus.com/
I gave you an upvote, since you'll probably be downvoted into oblivion soon :-D
All those "GitHub is dead" and "GitHub will crumble soon" cries are massively exaggerating things. Most users probably won't even notice these "issues" or "downtimes", since they only affect certain parts of the site.
Sure, it's annoying if you're impacted, but that still doesn't spell doom for GitHub as an organization.
GitHub's COO shared this under-reported X post last month [1] on their exponential growth. I'd love to see more proactive messaging on their growth rate / vision for agentic interactions.
> … platform activity is surging. There were 1 billion commits in 2025. Now, it's 275 million per week, on pace for 14 billion this year if growth remains linear (spoiler: it won't.)
> GitHub Actions has grown from 500M minutes/week in 2023 to 1B minutes/week in 2025, and now 2.1B minutes so far this week.
[1] https://x.com/kdaigle/status/2040164759836778878
This is the effect of agentic workflows. I know how much faster I've been going since the agents got good. I'm not surprised they're struggling.
All the more reason to donate to Forgejo/Codeberg[1][2] or contribute the code to SourceHut[3].
[1] https://liberapay.com/forgejo
[2] https://donate.codeberg.org/
[3] https://sr.ht/~sircmpwn/sourcehut/
I'm using GitHub for my business, and so are millions of others. Might be time to prioritize paying customers, historically popular open source repos, and PRs created by known human actors. Agents can wait; humans have much less patience.
Why don't they just raise GitHub Actions prices? Supply and demand; that would sort itself out.
I agree. This is what I keep wondering. Just go back to pro accounts for $10 a month. That would likely sort out a lot of this.
That kinda presumes demand is the problem.
Unless you're suggesting driving demand to zero...
I don't know how art sites are handling AI. The cost to create has gone down by multiple orders of magnitude. People are pushing more content into online services than ever before, and the rate is only increasing.
Does GitHub publish post-mortems?
Incidents spiked recently and I wonder what's repeatedly causing the issue.
Mistakes (preventable or not) can happen, but repeated failures are a systemic issue.
Down to 84.88% uptime: https://mrshu.github.io/github-statuses/
Can't even do three 8s properly.
How reliable is this uptime? and why it's sooo different from gh's official status numbers?
Their headline figure is a bit exaggerated: it's derived from the official status numbers, but aggregates across all GH services.
Imagine you run 365 services, and each goes down 1 day a year.
If those are all on the same day, this would report you having 99.7% uptime.
If instead, each service goes down 1 day per year but on different days, this would report you having 0% uptime.
Despite the same actual downtime for any given service.
The truth is somewhere in the middle: GitHub has run degraded for a significant amount of time.
But I don't think it is fair to take an incident like this one[1], where 5% of requests were incorrectly denied authorisation, and count it the same as you would the whole of github being down.
[1] https://www.githubstatus.com/incidents/02z04m335tvv
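The arithmetic in that thought experiment can be sketched directly. This is a hypothetical model of how such an aggregator might work, not the actual code behind the linked figures: a day counts as "down" for the whole platform if any service reported an incident that day.

```python
# Hypothetical aggregation model: the platform is "down" on a given day
# if ANY of its services had an incident that day.
DAYS = 365
SERVICES = 365

def aggregate_uptime(down_days_per_service):
    """down_days_per_service: one set of down-day indices per service."""
    platform_down_days = set().union(*down_days_per_service)
    return 1 - len(platform_down_days) / DAYS

# Scenario A: every service goes down on the same single day.
same_day = [{0} for _ in range(SERVICES)]

# Scenario B: each service goes down on a different day.
different_days = [{i} for i in range(SERVICES)]

print(f"{aggregate_uptime(same_day):.1%}")        # 99.7%
print(f"{aggregate_uptime(different_days):.1%}")  # 0.0%
```

Per-service uptime is an identical 364/365 in both scenarios; only the overlap of the outages changes the aggregate number.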
> How reliable is this uptime?
It seems to be quoting incident reports for the duration of each outage, so there is accountability in terms of being able to verify all the details of what they are counting.
> and why it's sooo different from gh's official status numbers?
Maybe this is counting any period with any service showing any level of issue as a complete fail, while the official numbers are cherry-picking a bit: only counting core services, or not counting significant performance issues that the other count does (things were working, just v…e…r…y … s…l…o…w…l…y), or averaging values (so 75% of services running at a given time looks ¼ as bad in their figures). The two sets of calculations could also be done with a different granularity, …
In other words: lies, damned lies, and statistics!
The only way to know is to know how both are calculated in detail, and that information might not be readily available.
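To make those two speculative methodologies concrete, here is how the same hour, with 3 of 4 services healthy, could be scored under each. Both formulas are assumptions for illustration, not either party's documented methodology:

```python
services_total = 4
services_up = 3

# "Anything degraded counts as fully down" (the harsh aggregated view):
binary_score = 1.0 if services_up == services_total else 0.0

# Averaged view: the hour is weighted by the fraction of services running,
# so 75% of services up looks only a quarter as bad:
averaged_score = services_up / services_total

print(binary_score, averaged_score)  # 0.0 0.75
```

The same hour of operation contributes 0% uptime under one formula and 75% under the other, which is enough to explain wildly different headline numbers.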
There is a link to the repo to verify the code and explain their process
1. This one counts downtime from any service, so if anything is down or degraded they count it as 100% down, which is harsh.
2. Github is doing some classic big-org sneaky things where they don't count degraded service fully. So if GitHub Actions is partially down for most people in a way that makes you say "github is down", there's a good chance that Microsoft doesn't count that, or only counts it partially.
> Github is doing some classic big org sneaky things where they don't count degraded service fully.
An even worse example is Travis CI. For more than a year their CI jobs have sometimes gotten stuck or not started for days, and, surprise-surprise, it's never shown on their status page[1] - always green. We would switch to something else entirely if not for their unique offering of PowerPC and SystemZ servers/runners. Apart from that, it's the worst CI service I've used so far.
[1] https://www.traviscistatus.com/history
Honestly, I kind of feel for them - their traffic must be spiking like no tomorrow. Though it's still annoying when the team takes downtime because GitHub is down... has anyone transitioned to something else successfully?
I'm a DevOps freelancer and I've moved projects off GitHub Actions in prior years (cost and security driven). Everyone uses GitHub a little differently, so there isn't a single migration path. It seems like all parts of GitHub are on fire now, but I'd generally recommend moving in stages.
For my personal work I did a hard cutover to GitLab last month. The issues import is the most complex part as the default import messes up issue authors.
This makes me nostalgic for platforms like https://unfuddle.com. Those were the old days, when other providers offered Subversion as well as git.
If it were a startup, sure, I'd feel for them. This is Microsoft, though. They have the money and resources.
I should say that I don't feel for the business. The SREs and admins, though, I certainly do.
This is 100% AI's fault. It's a mix of more commits coming in, most likely code-quality degradation, and I would not be surprised if capex that could be used to help with load is going towards AI instead.
I re-ask this: https://news.ycombinator.com/item?id=47979968
If everyone is vibecoding, and SaaS plays have no moats anymore, and everyone says they are mad at Github's reliability...why aren't there like 10 viable replacements already?
Why are you still using Github?
- There are already viable GitHub replacements, like Codeberg, Bitbucket, Gitlab, etc. Everyone stays on Github for network effects, not because of the superior product. You can't vibe code network effects.
- And yes, GitHub is a massive product with like 50 different huge features. No reasonable person would say you can trivially vibecode that. Vibecoding would still make it easier. I feel this argument is a bit silly, no? "Ah, you can't vibecode GitHub in a weekend? That proves vibecoding was a mirage!" Surely even the most fervent anti-AI skeptic must admit there must be some middle ground between "a mirage" and "can literally replace millions of man-hours of work".
Why do network effects matter for "in-house" (i.e. corporate/commercial) software work?
Open-source projects, yes I can sort of get it, you want to be where the contributors are (but there are downsides to that also).
What network effects does GitHub have? Every repository is independent. It's like saying GoDaddy has network effects.
Everyone already has an account, so there's no friction to opening up issues, adding thumbs up to issues, using the discussion forum, etc. And while I think it's pretty silly, a lot of people take "10k stars on GitHub" to be a positive signal, and you can only get there when you have 10k people willing to star on your platform.
Forgejo is super interesting too!
My theory is that vibecoded replacements haven't succeeded for the same reason why GitHub's quality has declined: because vibecoding/AI software development isn't as efficient as believed when measuring real-world outcomes.
Yes, yes, yes. I think this is a perfect natural experiment to see if SaaS moats really have gotten smaller... or if that was just an AI mirage.
> SaaS plays have no moats anymore
I have yet to see literally anyone say this.
I have yet to even see anyone claim that software can't constitute a moat anymore but I expect that there are people saying that. GitHub has a huge non-software moat in the form of network effects, brand recognition, and good will.
The hardest part isn't making a "forge", it's making money off of making a forge. Getting a sufficiently large number of paying customers.
If GitHub doesn't get their quality issues under control, someone probably will manage to breach that moat and take over the market. It's not like there's a lack of competitors (pre-LLM: GitLab, Bitbucket, Gitea, SourceHut, etc.; post-LLM: Tangled, esrc is promising something any day now. Probably more in both camps that don't come to mind).
>> SaaS plays have no moats anymore

> I have yet to see literally anyone say this.
I've heard a lot of people say this... including myself after a few root beers. I think you just have to look at any time an AI feature is announced and some related company's stock price crumbles. Just google something like "stock price tumbles after anthropic announces" or something like that.
> Why are you still using Github?
Because everyone wants the fake internet points (sorry, stars) to mention on their CV.
Because there are already a number of viable alternatives. Them not being chosen has nothing to do with AI coding, but with other factors like market momentum, network effects, and familiarity. They are used, just much less so. If there are already good alternatives, why would anyone vibe code a new one any more than they would write a new one manually? Forges are not sexy stuff, and the existence of numerous decent free ones means that you aren't going to be able to sell a new one in any way (paid accounts, stalking/advertising, …), at least not until it has a significant following, and that is unlikely to happen because of the reasons above.
People not wanting to use GitHub (or one of the common alternatives that already exist) are more likely to just use git as-is, with other tools as needed for issue tracking, CI, etc., than to create a new forge.
Locally, I'm using gitolite+cgit. I was previously using Gitea, but that didn't suit my requirements.
I'm using GitHub for my open source projects as:
1. While GitHub Actions has its issues and doesn't work for everyone, I've found it easy to build and test an IntelliJ plugin against multiple IntelliJ versions.
2. I don't have to pay for and manage the hosting of the git repository.
I run my own Forgejo instance.
I still have a GitHub account I actively use.
GitHub outages and stuff don’t really affect me, so I have no great reason to leave. But I have good reason to stay, because that’s where everyone else is already.
It's a great question. I'm assuming it is due to devs being too lazy to fight the momentum. Go ahead and switch services if five-nines uptime is critical to your codebase. The daily HN complaint isn't going to move mountains for you... oh, maybe next outage then, if you're not too busy? Right..
Claude Code web won't let me use my gitea instance.
What do you mean? Like it doesn't know how to perform actions on it the way it does with the gh cli? FWIW, in a different comment I cited the gh cli + Claude Code as one of the reasons I still use GitHub.
Claude Code in the CLI works fine. I mean that if you want to use the Claude Code web interface (https://claude.ai/code), the literal first step is connecting to GitHub.
> Why are you still using Github?
Show me the VCs who are willing to fund the marketing effort that would be needed to conquer the network-effects moat, and I'm in.
Inertia, because 'everyone uses it', network effects, integrations.
Because my corporate overlord does
I have a lot of sway over what git+CI/CD system my corporate overlord uses, and I am very GitHub-alternative curious right now. If anyone is pushing an alternative git+CI/CD stack, I want to hear it.
This video suggests OpenAI is actively developing an alternative to GitHub
https://youtu.be/f3u57jkwBFE?si=FJuxZfmc-i7EkPlx
That's not what they mean. That would still be a moat, just for OpenAI instead of Microsoft. They mean that anyone who wants a GitHub (eventually) can just tell their own AI to make them a GitHub on the spot.
compute??
GitHub is struggling because of compute, which comes from everyone vibecoding and triggering Actions 10x more.
I can vibecode an alternative, but once I have users, who is going to secure this amount of compute? Compute + the talent to manage it (devops isn't vibecoded *yet*) is a moat.
Why care about making a commercial alternative? I just want a “forge” for my team that looks like a combination of GitHub and tangled (stacked PRs and JJ).
So I’m working on waza.sh. I have NO intent on making it commercial, nor open-source (unless someone wants to collab on it).
Will it need compute? Yes! But for my team only, so a dedicated box at OVH/Hetzner for €30 is more than sufficient.
> Why are you still using Github?
Stars. Grifters have discovered vibe-coding and the ability to buy GitHub stars, followers, etc.
https://duckduckgo.com/?q=buy+GitHub+stars&ia=web
Why do stars and followers matter? Who is looking at them?
One (small, stupid) reason for me is that Claude Code works really well with the gh cli.
Another reason is GitHub Actions. For all of its problems (supply chain, reliability issues... to name a few), its tight integration with GitHub (not git) events is pretty great (when it works).
How is company morale at GitHub? I imagine it's really depressing working there right now with all this downtime. I know GitHub is being used by AI agents like there's no tomorrow, but there have also been enshittification attempts, and I have seen some comments saying GitHub could be more efficient.
So what are people who are at github doing right now? Like what do the priorities look like?
Once again, a reminder for people to look at Codeberg. An uptime of 84.88% is just not acceptable, GitHub. Personally, I don't think GitHub can come out of this. The problem has gone on for too long and has become too large for people to ignore.
I'm not sure the uptime or UX of Codeberg is meaningfully better?
We get it.
We should probably take a break on these. It's probably more newsworthy now when GitHub is "up".