It's one of those statements that's a bit nuanced. What I should have said is: because so many git repos live there, a lot of services just pull straight from them. npm/yarn, some pip, lots of ruby gems, etc. So the times GitHub DOES go down, it causes mass outages at work. CI jobs stop working, developer environments just stop when package managers can't pull. Heck, some languages use git URLs AS the package manager. For better or worse, the modern software development cycle depends on somewhat reliable git sources, and the vast majority of those are on GitHub.
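To make that concrete, here's a rough sketch of what those git-backed dependencies look like (the org/repo names are made up, but the mechanisms are real): pip can install straight from a git URL, and Go treats the GitHub path itself as the package identifier.

    # pip installing a dependency directly from a (hypothetical) GitHub repo
    pip install "git+https://github.com/someorg/somelib.git@v1.2.3"

    # Go: the import path is effectively the git URL, fetched from GitHub at build time
    go get github.com/someorg/somelib@v1.2.3

If GitHub is unreachable, both of these just fail, and everything downstream of them fails too.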
Last time GitHub failed for a couple of hours, we were in the middle of a deploy to around 600,000 people. That was a fun experience. Learned our lesson!
You are correct though: in theory it's all git, so we could (and now have) set up mirrored git repos for all our dependencies and code. We just hadn't before because we were lazy.
Having a single centralized source will always cause these issues: it can go down, either temporarily or permanently. It's all part of the convenience vs. single-point-of-failure trade-off.
In the short run an outage will cause some issues, which can be mitigated by keeping local mirrors of critical repos. Moving elsewhere should in theory be as easy as replacing the github.com URL with gitlab, codeberg, your local git server, etc. (and the auth info, of course).
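As a rough sketch of what that mirroring looks like (the URLs and paths here are placeholders, not anyone's actual setup):

    # keep a full local mirror of a critical dependency
    git clone --mirror https://github.com/someorg/somelib.git /srv/mirrors/somelib.git

    # refresh the mirror periodically (cron, CI job, whatever)
    git --git-dir=/srv/mirrors/somelib.git remote update --prune

    # when github.com is unavailable, point a checkout at the mirror (or gitlab/codeberg/etc.)
    git remote set-url origin git@your-git-server:someorg/somelib.git

The awkward part is rarely the git commands; it's making sure your package manager and CI config actually resolve to the mirror when the primary is down.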
Actually testing what happens when GitHub and/or other services are down, and seeing how your product or build pipeline handles it, is a very good thing to do, but it's very rarely done. It can be accomplished easily, for example by adding a drop rule in iptables. Testing for bad things never seems to happen though, and then when the outage is real nothing works and everyone panics.
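Something along these lines on a build box would simulate a GitHub outage. The address range is one of GitHub's published ranges at the time of writing and may change, so treat it as illustrative:

    # drop outbound traffic to a GitHub address range (run as root)
    iptables -I OUTPUT -d 140.82.112.0/20 -j DROP

    # ...run your builds and deploys and watch what breaks...

    # remove the rule again when done
    iptables -D OUTPUT -d 140.82.112.0/20 -j DROP

Dropping the IP range is a more honest test than blocking the hostname in DNS, since cached DNS entries and pinned IPs can otherwise let some traffic through.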