These outages are also a good test of how much local resilience teams actually built. My guess is most shops are much more dependent on GitHub than they like to admit.
I've been on a bit of a binge moving a bunch of stuff to self-hosting at home. Yesterday I finally completed my self-hosted Forgejo instance, together with Linux, Windows (via VM) and macOS (via Mac Mini) runners/workers for CI/CD, so everything finally lives in-house (literally), instead of source code + Actions living on GitHub while the infrastructure they deploy to lives locally.
This is probably the first time I've felt vindicated in a self-hosting move literally the day after finishing the migration; a very pleasant feeling. Usually it takes a month or two before I get there.
And once you start self-hosting, you realise how slow the 'modern' web actually is.
I host Forgejo on a single NUC alongside a bunch of other stuff in Proxmox, and the page loads in 6ms! Immich is not quite as fast, but still a ton faster than Google Photos.
The idea of a homelab is appealing to me, but then I actually start building one and get tired of it quickly. When I’ve been fixing broken systems at work all day I don’t really want to have to be my own sysadmin too.
I've got a nice, powerful Minisforum on my desk that I bought at Christmas and haven't even switched on.
I've tried for 15 years to keep a homelab, but in the past I always got lost in the complexity after a year or so. About 3 years ago I gave NixOS a try for managing everything, which (counter-intuitively, perhaps) suddenly made everything easier: now I can come back after months and still understand where everything is and how it works just from reading.
Setting up Forgejo + runners declaratively is probably ~100 lines in total, and it doesn't matter if I forget how it works; I just have to spend five minutes reading to catch up when I come back in 6 months to change or fix something.
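For the curious, a minimal sketch of what that looks like with the stock nixpkgs modules (the domain, token path, and labels here are placeholders, not my actual values):

    # Forgejo itself plus one Actions runner, all declarative.
    services.forgejo = {
      enable = true;
      settings.server = {
        DOMAIN = "git.example.home";  # placeholder
        HTTP_PORT = 3000;
      };
    };

    services.gitea-actions-runner.instances.local = {
      enable = true;
      name = "nix-runner";
      url = "http://localhost:3000";            # the instance above
      tokenFile = "/run/secrets/runner-token";  # registration token, kept out of the Nix store
      # container jobs; assumes virtualisation.docker.enable = true elsewhere
      labels = [ "ubuntu-latest:docker://node:20-bookworm" ];
    };

After that, everything is a `nixos-rebuild switch` away.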
I think the trick to not getting tired of it is to make it as simple as humanly possible. The less stuff you have, the easier it gets; at least that's intuitive :)
Just to echo what others are saying: NixOS and Proxmox are the answer.
I run both right now, but I am in the process of just running NixOS on everything.
NixOS really is that good, particularly for homelabs. The module system and the ability to share modules across machines is a real superpower. You essentially end up with a base config that all machines extend, and the same idea applies to users and groups.
One of the other big benefits, particularly for homelabs, is that your config is effectively self-documenting. Every quirk you discover is persisted in a source controlled file. Upgrades are self-documenting too: upstream module maintainers are pretty good about guiding you towards the new way to do things via option and module deprecation.
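The sharing pattern fits in a few lines; a minimal sketch with made-up hostnames and packages:

    # common.nix: the base config every machine imports
    { pkgs, ... }: {
      services.openssh.enable = true;
      users.users.admin = {
        isNormalUser = true;
        extraGroups = [ "wheel" ];
      };
      environment.systemPackages = with pkgs; [ git htop ];
    }

    # hosts/nuc.nix: one machine extending the base
    { ... }: {
      imports = [ ./common.nix ];
      networking.hostName = "nuc";
      services.forgejo.enable = true;  # only this host runs the forge
    }

Anything host-specific stays in the host file; everything else accretes into the shared module.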
I mean this in a good way, but I'm slightly chuckling to myself that it reads like people are just discovering IaC...on HN. That's all Nix configs are, at the end of the day.
No matter the tool, manage your environment in code, your life becomes much easier. People start and then get addicted to the ClickOps for the initial hit and then end up in a packed closet with a one way ticket to Narnia.
This happens in large environments too, so not at all just a home lab thing.
Unless you actually need hardware (local LLM host, massive data transformation jobs), it is also easy to fall into the many-machines trap. A single old laptop, N97, Optiplex, etc. sitting in a corner is actually a huge amount of computing power that will rival most cloud offerings. A single machine can do so much.
Yeah, true. I have an old Asus X550L from 2014, a very budget, basic home laptop with the battery removed, running as my server. I do some dev on it with VSCode remoting into it and Claude Code, and run Jellyfin, Audiobookshelf, Teamspeak, IRC and TS bots, nginx, Syncthing and some static websites.
I'm still usually under 10% CPU usage and at 25% RAM usage unless I'm streaming and transcoding with Jellyfin.
It's been fun and super useful. Almost any old laptop from the past 15 years could run and solve several home computing needs with little difficulty.
Yup this is what I've got up and running recently and it's been awesome.
My setup is roughly the following.
- Dell optiplex mini running Proxmox for compute. Unraid NAS for storage.
- Debian VM on the Proxmox machine running Forgejo and Komodo for container management.
- Monorepo in Forgejo for the homelab infrastructure. This lets me give Claude access to just the monorepo on my local machine to help me build stuff out, without needing to give it direct access to any of my actual servers.
- Claude helps me build out deployment pipeline for VMs/containers in Forgejo actions, which looks like:
  - Forgejo runner creates NixOS builds => Deploy VMs via Proxmox API => Deploy containers via Komodo API (a rough sketch follows at the end of this comment)
- I've got separate VMs for:
  - gateway for reverse-proxy & authentication
  - monitoring with prometheus/loki/grafana stack
  - general use applications

Since storage is external with NFS shares, I can tear down and rebuild the VMs whenever I need to redeploy something.

All of my docker compose files and nix configs live in the monorepo on Forgejo, so I can use Renovate to keep everything up to date.
Plan files, kanban board, and general documentation live adjacent to Nix and Docker configs in the monorepo, so Claude has all the context it needs to get things done.
I did this because I got tired of using Docker templates on Unraid. They were a great way to get started, but it's hard to pin container versions and still keep them up to date (Unraid relies heavily on the `latest` tag). I've been moving stuff over to this setup bit by bit and really enjoying it so far.
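For a sense of the shape, the pipeline above might look something like the stripped-down sketch below; the flake attribute, hostnames, and especially the Komodo endpoint/payload are illustrative placeholders rather than my real config:

    # .forgejo/workflows/deploy.yml (illustrative sketch only)
    on:
      push:
        branches: [main]
    jobs:
      deploy:
        runs-on: docker
        steps:
          - uses: actions/checkout@v4
          # 1. runner creates the NixOS build
          - run: nix build .#nixosConfigurations.gateway.config.system.build.toplevel
          # 2. deploy the VM via the Proxmox API (PVEAPIToken auth is real,
          #    but the provisioning step is heavily simplified here)
          - run: |
              curl -X POST "https://proxmox.example:8006/api2/json/nodes/pve1/qemu" \
                -H "Authorization: PVEAPIToken=ci@pve!deploy=${{ secrets.PROXMOX_TOKEN }}"
          # 3. deploy containers via the Komodo API (endpoint and payload
          #    are purely illustrative)
          - run: |
              curl -X POST "https://komodo.example/api/execute" \
                -H "X-Api-Key: ${{ secrets.KOMODO_KEY }}" \
                -d '{ "type": "DeployStack", "params": { "stack": "apps" } }'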
Maybe my needs are simpler, but I just make do with systemd services and apt (Debian). I've also set up Incus for occasional software testing and playing around. After using OpenBSD as a daily driver, I'm more keen on creating a native package for the OS/distro than wrangling docker compose files.
Thanks. Yeah, I've probably been overcomplicating it. I was running Kubernetes on Talos, thinking that at least it would be familiar. Power tools like that for running simple workloads on a single node are an invitation to headaches.
> When I’ve been fixing broken systems at work all day I don’t really want to have to be my own sysadmin too.
There’s only one solution to this.
Quit your job.
There isn't much work or maintenance to do, really. When you are the sole user everything is oversized, and if it is only accessible at home you can be lazy with updates and security anyway.
With the help of coding agents it's easier than ever. Just get Claude/Codex to create Helm charts / Docker Compose files for you. Struggling with some command line juggling to fix some obscure error? An agent can mostly help you in no time.
I recently did this as well, and one of the things that struck me is just how fast Actions are compared to GitHub!
That said, I've got Linux and macOS setup with a Mac Mini (using a Claude-generated Ansible task file), but configuring a Windows VM seemed a bit painful. You didn't happen to find anything to simplify the deployment process here, did you?
> You didn't happen to find anything to simplify the deployment process here, did you?
No, unfortunately not; the Windows VM setup + Forgejo Windows runner was the most painful thing for me to set up, no doubt. It's just such a hassle to set things up reliably; even getting logs out of it was trouble... To be fair, my Mac Mini was set up manually at first, with Nix layered on top, while the Windows setup is 100% automated, so it's not an entirely fair comparison; automating the Mac Mini setup would probably be similarly harsh. But it's a mishmash of Nix for configuring and booting the VM, XML files for the "autounattend" setup, .ps1 bootstrapping scripts and a .cmd script for finalizing. A big mess.
My Raspberries (and OrangePi) have better availability than GitHub, and if they were down I'd be out of power/internet and wouldn't be able to work much anyway.
The only problems I've found with Forgejo are a lack of fine-grained permissions and the lack of an API for pulling action invocations. The actions log API endpoints are present in Gitea, from what I can tell.
Forgejo 15 was just released last week with repo-specific access tokens. More to come in the future.
I moved my forge to my home; outside of a little stress getting all the containers wrangled, it was pretty effortless to set up Forgejo.
I do need a good backup solution though, that’s one thing I’m missing.
I self-host Forgejo for personal and indie-startup purposes, and like it well enough.
The downside with that is it misses one of the key purposes of GitHub: posturing for job-hunting/hopping. It's another performative checkbox, like memorizing Leetcode and practicing delivery for brogrammer interviews.
If you don't appear active on GitHub specifically (not even Codeberg, GitLab, nor something else), you're going to get dismissed from a lot of job applications, with "do you even lift, bro" style dissing, from people who have very simple conceptions of what software engineers do, and why.
There is a fairly straightforward feature in Forgejo to sync your repos to Github, if that's what you want to do. It's not perfect, of course, but should help to advertise your projects and keep your activity heatmap green.
I mostly use Forgejo for my private repos, which are free at Github, but with many limitations. One month I burned all my private CI tokens on the 1st due to a hung Mac runner. Love not having to worry about this now!
or you can just have two remotes and push to both sites and enjoy git's distributed nature
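A minimal sketch of that (URLs are placeholders); note that the first `--add --push` replaces the default push URL, so you add both explicitly:

    git remote set-url --add --push origin git@forgejo.example.com:me/repo.git
    git remote set-url --add --push origin git@github.com:me/repo.git

After that, a single `git push` updates both forges, while fetches keep coming from the original URL.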
> If you don't appear active on GitHub specifically... you're going to get dismissed from a lot of job applications
I sometimes wonder if my coursemates back in the day, who automated commits to private repos just to keep the green boxes packed, actually got any mileage out of it.
I get that. To counter it I usually try to have at least one public repo on my Forgejo instance and link to that on my resume/LinkedIn. It helps that I'm angling for security/infra positions so the self-hosting aspect actually helps but even without that I would imagine it signals something. Maybe not ideal for the most mainstream jobs (whatever that even means...), but I suspect some people will be intrigued by the initiative.
Edit: to the "do you even lift bro", the response becomes "yeah man, I've built my own gym - oh, you go to Planet Fitness? Good luck."
Self hosting was the correct solution.
6 years early [0] and you have better uptime than GitHub.
[0] https://news.ycombinator.com/item?id=22867803
Instability aside, I found several things about GitHub awkward, annoying, or simply missing, so I spent a month building my own. I think we're going to be seeing a lot more of this.
Interesting. I speculated not long ago that Microsoft is really taking a dive here, and that other companies might step in with better alternatives to GitHub. Self-hosting is not quite what I had in mind, but it's interesting to read about people who go that route. Microsoft has really put themselves in trouble over the last year or two; some things simply no longer work, that much is clear.
https://mrshu.github.io/github-statuses/ says they are down to 88.15% uptime. Even when you consider uptime of individual components, their best is 99.78%, so two nines.
I see Microsoft-mandated AI is doing wonders... for self-hosters and Linux enthusiasts.
We have pretty basic needs - git repos + actions - and a bit of downtime here and there doesn't really affect us too much because we're not constantly committing and deploying, but even we're looking around for alternatives now.
Also, looks like people might be pummelling the SourceHut servers looking for an alternative: https://sr.ht/ is down. (Edit: was down when I wrote that, back up now).
tangled.org maybe?
It would be wild if they dropped below the "two 9s" metric. I think they would need an additional ~16 hours of outage in the 90-day rolling period.
https://mrshu.github.io/github-statuses/ suggests that their combined uptime doesn't even meet 1 nine, let alone 2.
The intersection of uptime across every possible service they offer isn't a particularly great metric. I get the point that they are doing badly, but it makes it look worse than I think it really is.
What I would like to see is a combined uptime for "code services", basically Git+Webhooks+API+Issues+PRs, which corresponds to a set of user workflows that really should be their bread & butter, without highlighting things you might not care about (Codespaces, Copilot).
Depends how integrated those features are.
A service's availability is capped by its critical dependencies; this is textbook SRE stuff (see Treynor et al., The Calculus of Service Availability). Copilot may well sit off to the side (and has the worst uptime, dragging everything down), but if Actions depends on Packages then Actions can be "up" while in reality the service is not functional. If your release pipeline depends on Webhooks, then you're unable to release.
The obvious one is git operations: if you don't have git ops then basically everything is down.
So: you're right about Copilot, but the subset you proposed (Git+Webhooks+API+Issues+PRs) has the exact same intersection problem. If git is at one nine, that entire subset is capped at one nine too, no matter how green the rest of it looks.
And to be clear: git operations is sitting at 98.98% on the reconstructed dashboard linked above[1]. That is one nine. GitHub stopped publishing aggregate numbers on their own status page, which... tells you something.
[1]: https://mrshu.github.io/github-statuses/
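To make the intersection point concrete: a workflow that needs all of n components in series multiplies their availabilities, so it can only ever be as good as its worst link:

    A_workflow = A_1 * A_2 * ... * A_n <= min(A_i)

    e.g. five components at 99% each: 0.99^5 ~= 0.951, so ~95% end to end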
Well, yes, you could do that on a status page, but it's basically just lying to mark Actions green if it's actually down because it depends on Packages, which is red.
With that set, I wasn't proposing a group of totally independent services; I was talking about a set of things that I think represent pretty core services for GitHub users. If Git is dragging the rest of those down, fine; PRs are useless without it. In fact Git is worse than some, but it's not the worst of that group, and it is still a lot better than the dregs of Actions and Copilot.
Having said that, the numbers are of course terrible; two nines on a couple of things and one nine on everything else would be bad for a startup, and it's an utter embarrassment for a company that's been doing this for over a decade.
Also, I'd never considered that breaking your uptime into a bunch of different components is just a strategy to make your SRE look better than it actually is. The combined uptime tells the real story (88%!). Thanks for the link.
The number of nines assigned to a suite of services is not indicative of the quality of SRE at any given company, but rather a reflection of the tradeoffs a business has decided to make. Guaranteed there's a dashboard somewhere at Github looking at platform stickiness vs. reliability and deciding how hard to let teams push on various initiatives.
Yeah, I was just doing the math on their chart for the git operations. I added up 14.93 combined hours of downtime, which puts them WAY lower than the reported 99.7 metric they show right next to it.
So based on their own reporting, the uptime number should be 99.31%. That means only about 6 additional hours and they'd fall below 99.0%.
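The arithmetic, for reference, assuming the chart covers a 90-day window:

    90 days = 2160 hours
    1 - 14.93/2160 ~= 0.9931  => 99.31% uptime
    1% of 2160 h = 21.6 h     => ~6.7 h more downtime drops below 99.0%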
GitHub is going for “eight 8’s” at this rate.
So... three incidents today, all of them ~1h or longer, and everything's green for the day with "no recorded downtime".
These don't really look any different than past incidents which have red bars on their respective days, except maybe that those tended to be several hours.
What do the green bars even mean? Are they changed to non-green retroactively if people complain enough or something? As far as I can tell, literally none of the previous green days have any incident shown in the mouse-over, but there are multiple for today only, so I have to assume either the mouse-overs are conveniently "forgotten", or incidents do eventually turn their days non-green and they just don't bother updating anything same-day. Either way seems intentionally misleading.
There was another really bad incident today: https://www.githubstatus.com/incidents/zsg1lk7w13cf
> We have resolved a regression present when using merge queue with either squash merges or rebases. If you use merge queue in this configuration, some pull requests may have been merged incorrectly between 2026-04-23 16:05-20:43 UTC.
We had ~8 commits get entirely reverted on our default branch during this time. I've never seen a github incident quite this bad.
Don't worry, the status page says it's 100% working: green, all good. Even though I can't access a static page.
Once that 10x developer velocity from AI kicks in, I'm sure github stability improves. Did you know AI finally makes it economical to fix all the little bugs?
I moved to Gitlab a while ago. It's a whole new level of freedom not having to pay for self-hosted CI runners.
Well, I suppose they're finding out that if you lay off too many people, the IP of how the system works goes out the door with them.
I definitely have better uptime hosting my own Gitea instance. It's faster too. It's basically a knock-off GitHub. Plus, with the privacy concerns, I'm just happier overall. Easy setup; all I did was deploy the Helm chart.
I wondered. For most of today we'd seen that Actions were slow to trigger, and I had at least one that was just missed. It felt like something was definitely off, but the status was green all day until this.
Just cancelled my GitHub Copilot Pro+ annual subscription. The removal of Opus 4.6 stung, but the repeated downtime makes it unusable for me. Very disappointed.
No fuss instant refund of my unused subscription (£160) appreciated.
What will you use now?
Claude Code
Doesn’t GitHub Copilot Pro+ only have month-to-month payment option?
Only Pro (without plus) can be paid annually for some reason.
Pro+ did have an annual plan, but recently they paused or dropped the annual plans because they are trying to adjust the pricing model.
I paid 390 USD for a year Pro+ subscription in November 2025.
I used all the 'Premium Requests' every month on (mainly) Opus 4.5 & 4.6. From what I've read on here it seems I was probably a rather unprofitable customer - it felt like a steal.
Yes, it was definitely a good value for devs using those models. I was hoping since Github Copilot was rarely talked about compared to the Anthropic/OpenAI offerings, MS would continue to subsidize it to encourage people to move over, but maybe it just got too expensive.
Some of my jobs are completing, some are failing. Seems to be random. Kind of wish they would just fail outright, instead of running for 10 minutes and then failing.
At this point it'll be better to have alerts for when GitHub is online, rather than offline.
What are the good alternatives to GitHub? I've found some, but as long as most people use GitHub I can't really use another service, right? I can't share my alternative with another developer and force them to use it just for me. So I feel locked in; even if I want to move, I can't.
I'm probably going to use source hut in the future. It allows contributions via email without an account requirement.
https://sourcehut.org/
Give tangled.org a go, perhaps. It's got the self-hostability that cgit/Forgejo have and the social bits that GitHub has.
codeberg.org is a thing, and it's perfectly suited for open source projects. Many neovim plugins and home lab tools I use are hosted on Codeberg with no issues. If you just want to use GitHub as social media, you will never be happy.
Huh? Why not? Say "My git repository is here $URL" then if they want to visit and/or clone it, they'll do that, otherwise don't, why does it matter?
Sure, if you're after reaching the most people, gaining stars, or otherwise trying to attract "popularity" rather than just sharing and collaborating on code, then I'd understand what you mean. But then I'd start by questioning your motivation; that's a deeper issue than which SCM platform you use.
gitlab is about as close as you'll get
GitLab annoys me in tons of ways, but I feel it's generally better than GitHub overall.
At this point it should almost be news when it works.
Multiple 9s
Anyone also seeing Active Directory/Entra issues?
Even Vercel has more downtime nowadays.
If the day ends in Y…
Seems like they just can’t deal with the absolute deluge of AI vomit being uploaded every day.
Good riddance I hope it completely destroys them.
Are you talking about what they write to run the service? Because looking at the uptime, and considering it's Microslop, I wouldn't be surprised.
What they write and the extra demand from vibe coders.
I mean, this is the normal mode of operation for GitHub at this point.
0 nines.
9 nines found somewhere after the decimal point if you measure with enough precision
Azure webapp deploys are also trash right now. Microsoft needs to stop slathering h1b copilot slop and get basic things like Windows patches working.
Business as usual.
I am this > < close to just running Gogs or Forgejo on some Hetzner boxes, quitting my job, and charging people for access. Why aren't there like 10 startups doing this yet? Please? I want to give you my money. Just give me a git host that doesn't suck. (All the current ones suck.)
Codeberg and Sourcehut are doing it for free, for open source. Corporate probably won't ever move off Github, because they need the prestige of using Github - the actual service quality is completely irrelevant. This is an aspect of the enshittocene epoch - I repeat, quality is irrelevant to corporates.
Microsoft again.
I think it is time that Microsoft lets go of GitHub. They are handling it too poorly.
Microslop is destroying Github
Seems like outages are increasingly frequent nowadays. Obviously, this is not the best state of affairs, and developers should not be limited by their services. In the meantime I've been experimenting with building third spaces for people to chill in while they wait for the services they depend on to come back up.
The first one I've built is a little ASCII hangout for Claude @ https://clawdpenguin.com but threads like this make me want to build it for Github too.