
All Systems Operational

About This Site

For the status of GitHub Enterprise Cloud - EU, please visit: eu.githubstatus.com
For the status of GitHub Enterprise Cloud - Australia, please visit: au.githubstatus.com
For the status of GitHub Enterprise Cloud - US, please visit: us.githubstatus.com

Git Operations Operational
Webhooks Operational
API Requests Operational
Issues Operational
Pull Requests Operational
Actions Operational
Packages Operational
Pages Operational
Codespaces Operational
Copilot Operational
Dec 11, 2025
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Dec 11, 20:05 UTC
Update - We see signs of full recovery and will post a more in-depth update soon.
Dec 11, 20:05 UTC
Update - We are continuing to monitor and are seeing continued signs of recovery. We will post an update when we are confident that we have fully recovered.
Dec 11, 19:58 UTC
Update - We've applied a mitigation to fix intermittent failures in anonymous requests and downloads from GitHub, including Login, Signup, Logout, and some requests from within Actions jobs. We are seeing improvements in telemetry, but we will continue to monitor for full recovery.
Dec 11, 19:04 UTC
Update - We currently have ~7% of users experiencing errors when attempting to sign up, log in, or log out. We are deploying a change to mitigate these failures.
Dec 11, 18:47 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Dec 11, 18:40 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Dec 11, 17:53 UTC
Update - Git Operations is operating normally.
Dec 11, 17:20 UTC
Update - We believe the affected users are primarily those signing up, signing in, or using GitHub while logged out. We are continuing to investigate the root cause and are pursuing multiple mitigation approaches.
Dec 11, 17:19 UTC
Update - We are experiencing intermittent web request failures across multiple services, including login and authentication. Our teams are actively investigating the cause and working on mitigation.
Dec 11, 16:41 UTC
Update - Codespaces, Copilot, Git Operations, Packages, Pages, Pull Requests and Webhooks are experiencing degraded performance. We are continuing to investigate.
Dec 11, 16:09 UTC
Update - API Requests and Actions are experiencing degraded performance. We are continuing to investigate.
Dec 11, 16:01 UTC
Investigating - We are investigating reports of degraded performance for Issues
Dec 11, 15:47 UTC
Dec 10, 2025
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Dec 10, 14:52 UTC
Update - We've applied a mitigation to ensure all macOS jobs route to macOS fulfillers and are monitoring for full recovery.
Dec 10, 14:32 UTC
Investigating - We are investigating reports of degraded performance for Actions
Dec 10, 13:34 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Dec 10, 11:05 UTC
Update - Actions is operating normally.
Dec 10, 11:05 UTC
Update - We have validated the mitigation and are no longer seeing impact.
Dec 10, 11:05 UTC
Update - We are seeing improvements in telemetry and are monitoring for full recovery.
Dec 10, 10:58 UTC
Update - We've applied a mitigation to fix the issues with queuing and running Actions jobs. We will continue monitoring to confirm whether this resolves the issue.
Dec 10, 10:25 UTC
Update - The team continues to investigate issues with some Actions jobs being queued for a long time. We will continue providing updates on the progress towards mitigation.
Dec 10, 09:41 UTC
Update - We're investigating Actions workflow runs taking longer than expected to start.
Dec 10, 09:13 UTC
Investigating - We are investigating reports of degraded performance for Actions
Dec 10, 09:11 UTC
Dec 9, 2025

No incidents reported.

Dec 8, 2025
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Dec 8, 22:33 UTC
Update - We are beginning to see signs of resolution after applying a mitigation. We expect full resolution within approximately 30 minutes.
Dec 8, 22:10 UTC
Update - We're continuing to investigate and mitigate issues with the GPT-4o model for Copilot completions. Users can currently work around this issue by updating their VS Code settings with "github.copilot.advanced.debug.overrideEngine": "gpt-41-copilot".
Dec 8, 22:04 UTC
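For reference, the workaround above is a single entry in VS Code's user settings (settings.json). A minimal fragment for illustration, with any other settings in the file assumed to be unchanged:

    {
      "github.copilot.advanced.debug.overrideEngine": "gpt-41-copilot"
    }

Removing the entry restores the default completion model once the incident is resolved.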
Update - We are currently investigating failures with the GPT-4o model for Copilot completions.
Dec 8, 21:32 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Dec 8, 21:28 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Dec 8, 21:06 UTC
Update - We are investigating an incident causing Agent Session data to be missing from the AI Settings page in the Agent Control Plane.
Dec 8, 19:52 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Dec 8, 19:51 UTC
Dec 7, 2025

No incidents reported.

Dec 6, 2025

No incidents reported.

Dec 5, 2025
Resolved - On December 5th, 2025, between 12:00 pm UTC and 9:00 pm UTC, our Team Synchronization service experienced a significant degradation, preventing over 209,000 organization teams from syncing their identity provider (IdP) groups. The incident was triggered by a buildup of synchronization requests, resulting in elevated Redis key usage and high CPU consumption on the underlying Redis cluster.

To mitigate further impact, we proactively paused all team synchronization requests between 3:00 pm UTC and 8:15 pm UTC, allowing us to stabilize the Redis cluster. Our engineering team also resolved the issue by flushing the affected Redis keys and queues, which promptly stopped runaway growth and restored service health. Additionally, we scaled up our infrastructure resources to improve our ability to process the high volume of synchronization requests. All pending team synchronizations were successfully processed following service restoration.

We are working to strengthen the Team Synchronization service by implementing a killswitch, adding throttling to prevent excessive enqueueing of synchronization requests, and improving the scheduler to avoid duplicate job requests. Additionally, we’re investing in better observability to alert when job drops occur. These efforts are focused on preventing similar incidents and improving overall reliability going forward.

Dec 5, 22:20 UTC
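For illustration, the throttling and deduplication described in the summary above amount to a check at enqueue time. The following is a minimal sketch under assumed names and limits, with an in-memory stand-in for Redis; it is not GitHub's actual implementation.

    import time
    from collections import deque

    KILLSWITCH_ENABLED = False      # assumed killswitch flag: pause all enqueueing when True
    MAX_ENQUEUES_PER_MINUTE = 1000  # hypothetical throttle ceiling
    DEDUP_TTL_SECONDS = 300         # hypothetical window for dropping duplicate team requests

    _recent = {}       # team_id -> timestamp of last accepted request (dedup state)
    _window = deque()  # timestamps of accepted requests (rolling throttle window)
    queue = []         # stand-in for the real job queue

    def enqueue_team_sync(team_id, now=None):
        """Return True if a sync job was enqueued, False if it was dropped."""
        now = time.time() if now is None else now
        if KILLSWITCH_ENABLED:
            return False  # killswitch: pause all team synchronization
        last = _recent.get(team_id)
        if last is not None and now - last < DEDUP_TTL_SECONDS:
            return False  # deduplicate repeat requests for the same team
        while _window and now - _window[0] > 60:
            _window.popleft()  # drop timestamps outside the rolling minute
        if len(_window) >= MAX_ENQUEUES_PER_MINUTE:
            return False  # throttle: cap accepted requests per rolling minute
        _recent[team_id] = now
        _window.append(now)
        queue.append({"team_id": team_id, "enqueued_at": now})
        return True

In a production system the dedup keys, counters, and killswitch would live in shared storage such as Redis (for example, SET with NX and a TTL) so that all scheduler instances see the same state.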
Update - We believe we reached a scaling limit and are increasing the resources available to reduce delays in the team synchronization process.
Dec 5, 21:40 UTC
Update - We're continuing to investigate the delays in team synchronization and will report back once we have more information.
Dec 5, 19:17 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Dec 5, 18:38 UTC
Dec 4, 2025

No incidents reported.

Dec 3, 2025

No incidents reported.

Dec 2, 2025

No incidents reported.

Dec 1, 2025

No incidents reported.

Nov 30, 2025

No incidents reported.

Nov 29, 2025

No incidents reported.

Nov 28, 2025
Resolved - On November 28th, 2025, between approximately 05:51 and 08:04 UTC, Copilot experienced an outage affecting the Claude Sonnet 4.5 model. Users attempting to use this model received an HTTP 400 error, and 4.6% of all chat requests failed during this window. Other models were not impacted.

The issue was caused by a misconfiguration deployed to an internal service, which made Claude Sonnet 4.5 unavailable. The problem was identified and mitigated by reverting the change. GitHub is working to improve cross-service deployment safeguards and monitoring to prevent similar incidents in the future.

Nov 28, 08:23 UTC
Update - We have rolled out a fix and are monitoring for recovery.
Nov 28, 07:52 UTC
Update - We are investigating degraded performance with the Claude Sonnet 4.5 model in Copilot.
Nov 28, 07:04 UTC
Investigating - We are investigating reports of degraded performance for Copilot
Nov 28, 06:59 UTC
Nov 27, 2025

No incidents reported.