All Systems Operational

About This Site

For the status of GitHub Enterprise Cloud - EU, please visit: eu.githubstatus.com
For the status of GitHub Enterprise Cloud - Australia, please visit: au.githubstatus.com

Git Operations: Operational
Webhooks: Operational
API Requests: Operational
Issues: Operational
Pull Requests: Operational
Actions: Operational
Packages: Operational
Pages: Operational
Codespaces: Operational
Copilot: Operational
Status key: Operational · Degraded Performance · Partial Outage · Major Outage · Maintenance
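The component statuses above are also available programmatically: githubstatus.com is a Statuspage-hosted site, which exposes the standard Statuspage v2 JSON endpoints (e.g. `/api/v2/summary.json`). A minimal sketch of reading such a payload — the field names follow the public Statuspage schema, but the sample data below is illustrative, not live:

```python
import json

# Illustrative Statuspage v2 summary payload (not live data).
sample = json.loads("""
{
  "status": {"indicator": "none", "description": "All Systems Operational"},
  "components": [
    {"name": "Git Operations", "status": "operational"},
    {"name": "API Requests", "status": "operational"},
    {"name": "Copilot", "status": "degraded_performance"}
  ]
}
""")

def unhealthy(summary):
    """Return names of components not reporting 'operational'."""
    return [c["name"] for c in summary["components"]
            if c["status"] != "operational"]

print(sample["status"]["description"])  # overall banner text
print(unhealthy(sample))                # components needing attention
```

In a real client you would fetch the payload over HTTPS and poll it on an interval rather than hard-coding it; the parsing logic is the same.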
Past Incidents
Feb 16, 2025
Resolved - This incident has been resolved.
Feb 16, 12:44 UTC
Update - Pull Requests is operating normally.
Feb 16, 12:43 UTC
Update - Webhooks is operating normally.
Feb 16, 12:43 UTC
Update - API Requests is operating normally.
Feb 16, 12:43 UTC
Update - Issues is operating normally.
Feb 16, 12:42 UTC
Update - Codespaces is operating normally.
Feb 16, 12:42 UTC
Update - Git Operations is operating normally.
Feb 16, 12:42 UTC
Update - Actions is operating normally.
Feb 16, 12:42 UTC
Update - Pull Requests is experiencing degraded performance. We are continuing to investigate.
Feb 16, 12:24 UTC
Update - API Requests is experiencing degraded performance. We are continuing to investigate.
Feb 16, 12:10 UTC
Investigating - We are investigating reports of degraded performance for Actions, Codespaces, Git Operations, Issues, and Webhooks.
Feb 16, 12:08 UTC
Feb 15, 2025
Resolved - This incident has been resolved.
Feb 15, 04:15 UTC
Update - We completed the rollout. GitHub Codespaces are healthy.
Feb 15, 04:15 UTC
Update - We are continuing the rollout in the Central India, SE Asia, and Australia Codespaces regions. We are currently seeing only a minimal number of connection failures across all regions.
Feb 15, 03:21 UTC
Update - We have rolled out a fix to most of our Codespaces regions. Central India, SE Asia, and Australia are the remaining regions to be fixed. Customers in these remaining regions may be experiencing issues with Codespaces connectivity.
Feb 15, 01:47 UTC
Update - Some customers are continuing to see intermittent connection failures to their codespaces. We are monitoring closely to better estimate when the impact will be mitigated. At this time, we expect the number of impacted users to remain low, and we will update again when there is a development in our repair efforts.
Feb 14, 20:53 UTC
Update - Codespaces is experiencing degraded performance. We are continuing to investigate.
Feb 14, 20:22 UTC
Update - Some GitHub Codespaces users are experiencing intermittent connection failures. A deployment is underway to mitigate the problem, and US-based customers should see recovery soon. Full recovery is expected to take several hours. In the meantime, we advise customers experiencing issues to retry their connection attempts.
Feb 14, 20:12 UTC
Investigating - We are currently investigating this issue.
Feb 14, 20:06 UTC
Feb 14, 2025
Feb 13, 2025

No incidents reported.

Feb 12, 2025
Resolved - This incident has been resolved.
Feb 12, 23:10 UTC
Update - Claude Sonnet is fully available in GitHub Copilot again. If you used an alternate model during the outage, you can switch back to Claude Sonnet.
Feb 12, 23:10 UTC
Update - We are seeing a recovery with our Claude Sonnet model provider. We'll confirm once the problem is fully resolved.
Feb 12, 23:04 UTC
Update - Our Claude Sonnet provider has acknowledged the issue and will provide us with the next update by 11:30 PM UTC / 3:30 PM PT. Claude Sonnet remains unavailable in GitHub Copilot; please use an alternate model.
Feb 12, 22:54 UTC
Update - We have escalated the issue to our Claude Sonnet model provider. Claude Sonnet remains unavailable in GitHub Copilot; please use an alternate model.
Feb 12, 22:41 UTC
Update - Claude Sonnet is currently not working in GitHub Copilot. Please switch to an alternate model while we work on resolving the issue.
Feb 12, 21:59 UTC
Update - Copilot is experiencing degraded performance. We are continuing to investigate.
Feb 12, 21:52 UTC
Investigating - We are currently investigating this issue.
Feb 12, 21:51 UTC
Feb 11, 2025

No incidents reported.

Feb 10, 2025

No incidents reported.

Feb 9, 2025

No incidents reported.

Feb 8, 2025

No incidents reported.

Feb 7, 2025

No incidents reported.

Feb 6, 2025
Resolved - This incident has been resolved.
Feb 6, 11:13 UTC
Update - This issue has been mitigated. We will continue to investigate root causes to ensure this does not recur.
Feb 6, 11:13 UTC
Update - We have scaled out database resources and rolled back recent changes and are seeing signs of mitigation, but are monitoring to ensure complete recovery.
Feb 6, 11:05 UTC
Update - We are attempting to scale databases to handle observed load spikes, as well as investigating other mitigation approaches.

Customers may intermittently experience failures to fetch repositories with LFS, as well as increased latency and errors across the API.

Feb 6, 10:29 UTC
Update - We are investigating failed Git LFS requests and potentially slow API requests.

Customers may experience failures to fetch repositories with LFS.

Feb 6, 09:52 UTC
Investigating - We are investigating reports of degraded performance for API Requests.
Feb 6, 09:42 UTC
Feb 5, 2025
Resolved - Between Feb 5, 2025 00:34 UTC and 11:16 UTC, up to 7% of organizations using GitHub-hosted larger runners with public IP addresses saw those runners' jobs fail to start. The issue was caused by a backend migration in the public IP management system, which placed certain public IP address runners in a non-functioning state.

We have improved the rollback steps for this migration to reduce the time to mitigate any future recurrences, are working to improve automated detection of this error state, and are improving the resiliency of runners to handle this error state without customer impact.

Feb 5, 11:44 UTC
Update - We have identified a configuration change that we believe may be related. We are working to mitigate.
Feb 5, 11:17 UTC
Update - We are continuing the investigation.
Feb 5, 10:33 UTC
Update - We continue to investigate and have determined this is limited to a subset of larger runner pools.
Feb 5, 09:56 UTC
Update - We are investigating an incident in which Actions larger runners are stuck in provisioning for some customers.
Feb 5, 09:21 UTC
Investigating - We are currently investigating this issue.
Feb 5, 08:58 UTC
Feb 4, 2025

No incidents reported.

Feb 3, 2025
Resolved - A component that imports external Git repositories into GitHub had an incident caused by an improper internal configuration of a gem. We have since rolled back to a stable version, and all migrations are able to resume.
Feb 3, 19:37 UTC
Feb 2, 2025

No incidents reported.