GitHub Status
Follow @githubstatus or view our profile.
Visit our support site.
Get the Atom Feed or RSS Feed.

All Systems Operational

About This Site

Check GitHub Enterprise Cloud status by region (a polling sketch follows the list):
- Australia: au.githubstatus.com
- EU: eu.githubstatus.com
- Japan: jp.githubstatus.com
- US: us.githubstatus.com
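
Each regional page above is an Atlassian Statuspage instance, so its overall status can be polled programmatically. A minimal sketch, assuming each regional site exposes the standard Statuspage /api/v2/status.json endpoint (not verified here for every region):

```python
# Poll the overall status of each regional GitHub Enterprise Cloud status page.
# Assumes each site is a standard Atlassian Statuspage instance that exposes
# /api/v2/status.json; verify the endpoint before relying on it.
import json
import urllib.request

REGIONAL_STATUS_SITES = {
    "Australia": "https://au.githubstatus.com",
    "EU": "https://eu.githubstatus.com",
    "Japan": "https://jp.githubstatus.com",
    "US": "https://us.githubstatus.com",
}

def fetch_status(base_url: str) -> str:
    """Return the human-readable status description, e.g. 'All Systems Operational'."""
    with urllib.request.urlopen(f"{base_url}/api/v2/status.json", timeout=10) as resp:
        payload = json.load(resp)
    return payload.get("status", {}).get("description", "unknown")

if __name__ == "__main__":
    for region, url in REGIONAL_STATUS_SITES.items():
        try:
            print(f"{region}: {fetch_status(url)}")
        except OSError as exc:
            print(f"{region}: request failed ({exc})")
```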

Uptime over the past 90 days. View historical uptime.
- Git Operations: Operational, 99.92% uptime
- Webhooks: Operational, 99.88% uptime
- Visit www.githubstatus.com for more information: Operational
- API Requests: Operational, 99.91% uptime
- Issues: Operational, 99.76% uptime
- Pull Requests: Operational, 99.77% uptime
- Actions: Operational, 99.47% uptime
- Packages: Operational, 99.96% uptime
- Pages: Operational, 99.9% uptime
- Codespaces: Operational, 99.65% uptime
- Copilot: Operational, 99.6% uptime
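
For rough context, a 90-day uptime percentage maps to total downtime as sketched below. This is illustrative arithmetic only and does not reflect how Statuspage weights partial outages or component severity:

```python
# Convert a 90-day uptime percentage into approximate total downtime.
# Illustrative only: the status page's own calculation may treat partial
# outages and component severity differently.
WINDOW_DAYS = 90
MINUTES_PER_DAY = 24 * 60

def downtime_minutes(uptime_percent: float, window_days: int = WINDOW_DAYS) -> float:
    return (100.0 - uptime_percent) / 100.0 * window_days * MINUTES_PER_DAY

# Example figures taken from the component list above.
for name, pct in [("Actions", 99.47), ("Copilot", 99.6), ("Packages", 99.96)]:
    print(f"{name}: {pct}% uptime over 90 days is roughly {downtime_minutes(pct):.0f} minutes of downtime")
```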

    Past Incidents

    Feb 20, 2026
    Incident with Copilot GPT-5.1-Codex
    Resolved -This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
    Feb20,11:41 UTC
    Update -The issues with our upstream model provider have been resolved, and GPT 5.1 Codex is once again available in Copilot Chat and across IDE integrations (VS Code, Visual Studio, JetBrains).
    We will continue monitoring to ensure stability, but mitigation is complete.

    Feb20,11:19 UTC
    Update -We are still experiencing degraded availability for the GPT 5.1 Codex model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.

    Feb20,10:36 UTC
    Update -We are experiencing degraded availability for the GPT 5.1 Codex model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.
    Other models are available and working as expected.

    Feb20,10:02 UTC
    Investigating -We are investigating reports of degraded performance for Copilot
    Feb20,10:02 UTC
    Feb 19, 2026

    No incidents reported.

    Feb 18, 2026
    Degraded performance in merge queue
    Resolved -This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
    Feb18,19:20 UTC
    Update -We have seen significant recovery in merge queue and are continuing to monitor for any other degraded services.
    Feb18,19:18 UTC
    Update -We are investigating reports of issues with merge queue. We will continue to keep users updated on progress towards mitigation.
    Feb18,18:27 UTC
    Update -Pull Requests is experiencing degraded performance. We are continuing to investigate.
    Feb18,18:26 UTC
    Investigating -We are investigating reports of impacted performance for some GitHub services.
    Feb18,18:25 UTC
    Feb 17, 2026
    Intermittent authentication failures on GitHub
    Resolved -This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
    Feb17,19:06 UTC
    Update -We are continuing to monitor the mitigation and continuing to see signs of recovery.
    Feb17,18:55 UTC
    Update -We have rolled out a mitigation and are seeing signs of recovery and are continuing to monitor.
    Feb17,18:18 UTC
    Update -We have identified a low rate of authentication failures affecting GitHub App server to server tokens, GitHub Actions authentication tokens, and git operations. Some customers may experience intermittent API request failures when using these tokens. We believe we've identified the cause and are working to mitigate impact.
    Feb17,17:46 UTC
    Investigating -We are investigating reports of degraded performance for Actions and Git Operations
    Feb17,17:46 UTC
    Feb 16, 2026

    No incidents reported.

    Feb 15, 2026

    No incidents reported.

    Feb 14, 2026

    No incidents reported.

    Feb 13, 2026
    Disruption with some GitHub services regarding file upload
    Resolved -This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
    Feb13,22:58 UTC
    Investigating -We are investigating reports of impacted performance for some GitHub services.
    Feb13,22:30 UTC
    Feb 12, 2026
    Disruption with some GitHub services
    Resolved -Between February 11th 21:30 UTC and February 12th 15:40 UTC, users in Western Europe experienced degraded quality for all Next Edit Suggestions requests. Additionally, on February 12th, between 18:40 UTC and 20:30 UTC, users in Australia and South America experienced degraded quality and increased latency of up to 500ms for all Next Edit Suggestions requests. The root cause was a newly introduced regression in an upstream service dependency.

    The incident was mitigated by failing over Next Edit Suggestions traffic to unaffected regions, which caused the increased latency. Once the regression was identified and rolled back, we restored the impacted capacity. We have improved our quality analysis tooling and are working on more robust quality impact alerting to accelerate detection of these issues in the future.

    Feb12,20:34 UTC
    Update -Next Edit Suggestions availability is recovering. We are continuing to monitor until fully restored.
    Feb12,19:59 UTC
    Update -We are experiencing degraded availability in Australia and Brazil for Copilot completions and suggestions. We are working to resolve the issue.

    Feb12,19:18 UTC
    Update -We are experiencing degraded availability in Australia for Copilot completions and suggestions. We are working to resolve the issue.

    Feb12,18:46 UTC
    Investigating -We are investigating reports of impacted performance for some GitHub services.
    Feb12,18:36 UTC
    Intermittent disruption with Copilot completions and inline suggestions
    Resolved -Between February 11th 21:30 UTC and February 12th 15:40 UTC, users in Western Europe experienced degraded quality for all Next Edit Suggestions requests. Additionally, on February 12th, between 18:40 UTC and 20:30 UTC, users in Australia and South America experienced degraded quality and increased latency of up to 500ms for all Next Edit Suggestions requests. The root cause was a newly introduced regression in an upstream service dependency.

    The incident was mitigated by failing over Next Edit Suggestions traffic to unaffected regions, which caused the increased latency. Once the regression was identified and rolled back, we restored the impacted capacity. We have improved our quality analysis tooling and are working on more robust quality impact alerting to accelerate detection of these issues in the future.

    Feb12,16:50 UTC
    Update -We are experiencing degraded availability in Western Europe for Copilot completions and suggestions. We are working to resolve the issue.

    Feb12,15:33 UTC
    Update -We are experiencing degraded availability in some regions for Copilot completions and suggestions. We are working to resolve the issue.
    Feb12,14:08 UTC
    Investigating -We are investigating reports of impacted performance for some GitHub services.
    Feb12,14:06 UTC
    Disruption with some GitHub services
    Resolved -From Feb 12, 2026 09:16:00 UTC to Feb 12, 2026 11:01 UTC, users attempting to download repository archives (tar.gz/zip) that include Git LFS objects received errors. Standard repository archives without LFS objects were not affected. On average, the archive download error rate was 0.0042% and peaked at 0.0339% of requests to the service. This was caused by deploying a corrupt configuration bundle, resulting in missing data used for network interface connections by the service.

    We mitigated the incident by applying the correct configuration to each site. We have added checks for corruption in this deployment, and will add auto-rollback detection for this service to prevent issues like this in the future.
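
A corruption check of the kind described above can be as simple as verifying a digest of the configuration bundle before it is activated. A minimal sketch of that general pattern, with hypothetical file names and contents (this is not GitHub's deployment tooling):

```python
# Verify a configuration bundle against a published SHA-256 digest before
# activating it, and refuse to deploy on mismatch. Generic sketch of the
# corruption check described above, not GitHub's deployment tooling.
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_bundle(bundle: Path, expected_digest: str) -> bool:
    """Return True only if the bundle's digest matches the published one."""
    return sha256_of(bundle) == expected_digest

if __name__ == "__main__":
    # Demonstration with a throwaway bundle file (hypothetical contents).
    with tempfile.TemporaryDirectory() as tmp:
        bundle = Path(tmp) / "config-bundle.tar.gz"
        bundle.write_bytes(b"interface config v42")
        published = sha256_of(bundle)  # digest recorded at build time
        if verify_bundle(bundle, published):
            print("bundle digest verified; safe to apply")
        else:
            raise SystemExit("corrupt configuration bundle; aborting deploy and rolling back")
```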

    Feb12,11:12 UTC
    Update -We have resolved the issue and are seeing full recovery.
    Feb12,11:01 UTC
    Update -We are investigating an issue with downloading repository archives that include Git LFS objects.
    Feb12,10:39 UTC
    Investigating -We are investigating reports of impacted performance for some GitHub services.
    Feb12,10:38 UTC
    Incident with Codespaces
    Resolved -On February 12, 2026, between 00:51 UTC and 09:35 UTC, users attempting to create or resume Codespaces experienced elevated failure rates across Europe, Asia and Australia, peaking at a 90% failure rate.

    The disconnects were triggered by a bad configuration rollout in a core networking dependency, which led to internal resource provisioning failures. We are working to improve our alerting thresholds to catch issues before they impact customers and strengthening rollout safeguards to prevent similar incidents.

    Feb12,09:56 UTC
    Update -Recovery looks consistent with Codespaces creating and resuming successfully across all regions.

    Thank you for your patience.

    Feb12,09:56 UTC
    Update -Codespaces is experiencing degraded performance. We are continuing to investigate.
    Feb12,09:42 UTC
    Update -We are seeing widespread recovery across all our regions.

    We will continue to monitor progress and will resolve the incident when we are confident in durable recovery.

    Feb12,09:39 UTC
    Update -We have identified the issue causing Codespace create/resume actions to fail and are applying a fix. This is estimated to take ~2 hours to complete but impact will begin to reduce sooner than that.

    We will continue to monitor recovery progress and will report back when more information is available.

    Feb12,09:04 UTC
    Update -We now understand the source of the VM create/resume failures and are working with our partners to mitigate the impact.
    Feb12,08:32 UTC
    Update -We are seeing an increase in Codespaces creation and resuming failures across multiple regions, primarily in EMEA. Our team is analyzing the situation and working to mitigate this impact.

    While we are working, customers are advised to create Codespaces in US East and US West regions via the "New with options..." button when creating a Codespace.

    More updates as we have them.

    Feb12,08:02 UTC
    Investigating -We are investigating reports of degraded availability for Codespaces
    Feb12,07:53 UTC
    Disruption with some GitHub services
    Resolved -On February 11 between 16:37 UTC and 00:59 UTC the following day, 4.7% of workflows running on GitHub Larger Hosted Runners were delayed by an average of 37 minutes. Standard Hosted and self-hosted runners were not impacted.

    This incident was caused by capacity degradation in Central US for Larger Hosted Runners. Workloads not pinned to that region were picked up by other regions, but were delayed as those regions became saturated. Workloads configured with private networking in that region were delayed until compute capacity in that region recovered. The issue was mitigated by rebalancing capacity across internal and external workloads and general increases in capacity in affected regions to speed recovery.

    In addition to working with our compute partners on the core capacity degradation, we are working to ensure other regions are better able to absorb load with less delay to customer workloads. For pinned workflows using private networking, we are shipping support soon for customers to fail over if private networking is configured in a paired region.

    Feb12,00:59 UTC
    Update -Actions is experiencing capacity constraints with larger hosted runners, leading to high wait times. Standard hosted labels and self-hosted runners are not impacted.

    The issue is mitigated and we are monitoring recovery.

    Feb11,21:33 UTC
    Update -We're continuing to work toward mitigation with our capacity provider, and adding capacity.
    Feb11,19:37 UTC
    Update -Actions is experiencing capacity constraints with larger hosted runners, leading to high wait times. Standard hosted labels and self-hosted runners are not impacted.

    We're working with the capacity provider to mitigate the impact.

    Feb11,19:00 UTC
    Investigating -We are investigating reports of impacted performance for some GitHub services.
    Feb11,18:58 UTC
    Feb 11, 2026
    Incident with API Requests
    Resolved -On February 11, 2026, between 13:51 UTC and 17:03 UTC, the GraphQL API experienced degraded performance due to elevated resource utilization. This resulted in incoming client requests waiting longer than normal, timing out in certain cases. During the impact window, approximately 0.65% of GraphQL requests experienced these issues, peaking at 1.06%.

    The increased load was due to an increase in query patterns that drove higher than expected resource utilization of the GraphQL API. We mitigated the incident by scaling out resource capacity and limiting the capacity available to these query patterns.

    We're improving our telemetry to identify slow usage growth and changes in GraphQL workloads. We’ve also added capacity safeguards to prevent similar incidents in the future.
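
A capacity safeguard of the kind described, limiting how much capacity a single query pattern can consume, is often implemented as a per-pattern concurrency cap. A rough sketch of that idea (illustrative only, not GitHub's implementation):

```python
# Cap how many requests of a given query pattern may execute concurrently,
# so one expensive pattern cannot exhaust shared capacity.
# Illustrative sketch only, not GitHub's implementation.
import threading
from contextlib import contextmanager

class PatternLimiter:
    def __init__(self, limits: dict[str, int], default_limit: int = 50):
        self._sems = {p: threading.BoundedSemaphore(n) for p, n in limits.items()}
        self._default = threading.BoundedSemaphore(default_limit)

    @contextmanager
    def acquire(self, pattern: str, timeout: float = 0.5):
        sem = self._sems.get(pattern, self._default)
        if not sem.acquire(timeout=timeout):
            raise RuntimeError(f"capacity limit reached for pattern {pattern!r}")
        try:
            yield
        finally:
            sem.release()

limiter = PatternLimiter({"expensive-search": 5})

def handle_request(pattern: str) -> str:
    # Placeholder for actually executing the GraphQL query.
    with limiter.acquire(pattern):
        return f"executed {pattern}"

print(handle_request("expensive-search"))
```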

    Feb11,17:15 UTC
    Update -We've observed recovery for the GraphQL service latency.
    Feb11,17:13 UTC
    Update -We're continuing to remediate the service degradation and scaling out to further mitigate the potential for latency impact.
    Feb11,16:54 UTC
    Update -We've identified a dependency of GraphQL that is in a degraded state and are working on remediating the issue.
    Feb11,15:54 UTC
    Update -We're investigating increased latency for GraphQL traffic.
    Feb11,15:27 UTC
    Investigating -We are investigating reports of degraded performance for API Requests
    Feb11,15:26 UTC
    Incident with Copilot
    Resolved -On February 11, 2026, between 14:30 UTC and 15:30 UTC, the Copilot service experienced degraded availability for requests to Claude Haiku 4.5. During this time, on average 10% of requests failed with 23% of sessions impacted. The issue was caused by an upstream problem from multiple external model providers that affected our ability to serve requests.

    The incident was mitigated once one of the providers resolved the issue and we rerouted capacity fully to that provider. We have improved our telemetry to improve incident observability and implemented an automated retry mechanism for requests to this model to mitigate similar future upstream incidents.
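
The automated retry mentioned above is typically a bounded retry with exponential backoff and jitter. A generic sketch of that pattern (not Copilot's actual client code):

```python
# Bounded retry with exponential backoff and jitter for transient upstream
# failures. Generic sketch of the pattern referenced above, not Copilot's
# actual client code.
import random
import time

class UpstreamError(Exception):
    """Raised when the upstream model provider returns a retryable failure."""

def call_with_retries(request_fn, max_attempts: int = 4, base_delay: float = 0.5):
    for attempt in range(1, max_attempts + 1):
        try:
            return request_fn()
        except UpstreamError:
            if attempt == max_attempts:
                raise
            # Exponential backoff with jitter to avoid synchronized retries.
            time.sleep(base_delay * (2 ** (attempt - 1)) * random.uniform(0.5, 1.5))

# Example: a flaky upstream call that succeeds on the third attempt.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise UpstreamError("provider unavailable")
    return "completion"

print(call_with_retries(flaky))
```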

    Feb11,15:46 UTC
    Update -Copilot is operating normally.
    Feb11,15:46 UTC
    Update -The issues with our upstream model provider have been resolved, and Claude Haiku 4.5 is once again available in Copilot Chat and across IDE integrations.

    We will continue monitoring to ensure stability, but mitigation is complete.

    Feb11,15:46 UTC
    Update -We are experiencing degraded availability for the Claude Haiku 4.5 model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.
    Other models are available and working as expected.

    Feb11,15:27 UTC
    Investigating -We are investigating reports of degraded performance for Copilot
    Feb11,15:26 UTC
    Feb 10, 2026
    Disruption with some GitHub services
    Resolved -On February 10th, 2026, between 14:35 UTC and 15:58 UTC web experiences on GitHub.com were degraded including Pull Requests and Authentication, resulting in intermittent 5xx errors and timeouts. The error rate on web traffic peaked at approximately 2%. This was due to increased load on a critical database, which caused significant memory pressure resulting in intermittent errors.

    We mitigated the incident by applying a configuration change to the database to increase available memory on the host.

    We are working to identify changes in load patterns and are reviewing the configuration of our databases to ensure there is sufficient capacity to meet growth. Additionally, we are improving monitoring and self-healing functionality for database memory issues to reduce our time to detection and mitigation.

    Feb10,15:58 UTC
    Update -Pull Requests is operating normally.
    Feb10,15:58 UTC
    Update -We have deployed a mitigation for the issue and are observing what we believe is the start of recovery. We will continue to monitor.
    Feb10,15:51 UTC
    Update -We believe we have found the cause of the problem and are working on mitigation.
    Feb10,15:47 UTC
    Update -We continue investigating intermittent timeouts on some pages.
    Feb10,15:33 UTC
    Update -Pull Requests is experiencing degraded performance. We are continuing to investigate.
    Feb10,15:08 UTC
    Update -We are seeing intermittent timeouts on some pages and are investigating.
    Feb10,15:08 UTC
    Investigating -We are investigating reports of impacted performance for some GitHub services.
    Feb10,15:07 UTC
    Copilot Policy Propagation Delays
    Resolved -This incident has been resolved.
    Feb10,09:57 UTC
    Update -Copilot is operating normally.
    Feb10,00:51 UTC
    Update -We're continuing to address an issue where Copilot policy updates are not propagating correctly for a subset of enterprise users. This may prevent newly enabled models from appearing when users try to access them.

    This issue is understood and we are working to get the mitigation applied. Next update in one hour.

    Feb10,00:26 UTC
    Update -We're continuing to investigate an issue where Copilot policy updates are not propagating correctly for a subset of enterprise users.

    This may prevent newly enabled models from appearing when users try to access them.

    Next update in two hours.

    Feb 9,22:09 UTC
    Update -We're continuing to investigate an issue where Copilot policy updates are not propagating correctly for a subset of enterprise users.

    This may prevent newly enabled models from appearing when users try to access them.

    Next update in two hours.

    Feb 9,20:39 UTC
    Update -We're continuing to investigate an issue where Copilot policy updates are not propagating correctly for a subset of enterprise users.

    This may prevent newly enabled models from appearing when users try to access them.

    Next update in two hours.

    Feb 9,18:49 UTC
    Update -We're continuing to investigate an issue where Copilot policy updates are not propagating correctly for a subset of enterprise users.

    This may prevent newly enabled models from appearing when users try to access them.

    Feb 9,18:06 UTC
    Update -We're continuing to investigate an issue where Copilot policy updates are not propagating correctly for all customers.

    This may prevent newly enabled models from appearing when users try to access them.

    Feb 9,17:24 UTC
    Update -We’ve identified an issue where Copilot policy updates are not propagating correctly for some customers. This may prevent newly enabled models from appearing when users try to access them.

    The team is actively investigating the cause and working on a resolution. We will provide updates as they become available.

    Feb 9,16:30 UTC
    Investigating -We are investigating reports of degraded performance for Copilot
    Feb 9,16:29 UTC
    Feb 9, 2026
    Incident with Issues, Actions and Git Operations
    Resolved -On February 9, 2026, GitHub experienced two related periods of degraded availability affecting GitHub.com, the GitHub API, GitHub Actions, Git operations, GitHub Copilot, and other services. The first period occurred between 16:12 UTC and 17:39 UTC, and the second between 18:53 UTC and 20:09 UTC. In total, users experienced approximately 2 hours and 43 minutes of degraded service across the two incidents.

    During both incidents, users encountered errors loading pages on GitHub.com, failures when pushing or pulling code over HTTPS, failures starting or completing GitHub Actions workflow runs, and errors using GitHub Copilot. Additional services including GitHub Issues, pull requests, webhooks, Dependabot, GitHub Pages, and GitHub Codespaces experienced intermittent errors. SSH-based Git operations were not affected during either incident.

    Our investigation determined that both incidents shared the same underlying cause: a configuration change to a user settings caching mechanism caused a large volume of cache rewrites to occur simultaneously. During the first incident, asynchronous rewrites overwhelmed a shared infrastructure component responsible for coordinating background work, triggering cascading failures. Increased load caused the service responsible for proxying Git operations over HTTPS to exhaust available connections, preventing it from accepting new requests. We mitigated this incident by disabling async cache rewrites and restarting the affected Git proxy service across multiple datacenters.

    An additional source of updates to the same cache circumvented our initial mitigations and caused the second incident. This generated a high volume of synchronous writes, causing replication delays that cascaded in a similar pattern and again exhausted the Git proxy’s connection capacity, degrading availability across multiple services. We mitigated by disabling the source of the cache rewrites and again restarting Git proxy.
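
The write amplification described here is the classic failure mode of bulk cache rewrites that all fire at once; the self-throttling mentioned in the remediation steps below usually amounts to batching the rewrites and pausing between batches. A generic sketch of that pattern (not GitHub's caching system):

```python
# Self-throttled bulk cache rewrite: process keys in small batches and sleep
# between batches so a mass update cannot overwhelm the cache backend.
# Generic sketch of the throttling pattern, not GitHub's caching system.
import time
from typing import Callable, Iterable

def rewrite_in_batches(
    keys: Iterable[str],
    rewrite_one: Callable[[str], None],
    batch_size: int = 100,
    pause_seconds: float = 0.05,
) -> int:
    """Rewrite cache entries in batches, pausing between batches (self-throttling)."""
    total = 0
    batch: list[str] = []
    for key in keys:
        batch.append(key)
        if len(batch) == batch_size:
            total += _flush(batch, rewrite_one, pause_seconds)
    total += _flush(batch, rewrite_one, pause_seconds)
    return total

def _flush(batch: list[str], rewrite_one: Callable[[str], None], pause: float) -> int:
    for key in batch:
        rewrite_one(key)
    count = len(batch)
    batch.clear()
    if count:
        time.sleep(pause)  # pause before the next batch so the backend can keep up
    return count

# Example: rewrite 250 hypothetical user-settings cache keys, 100 at a time.
written = rewrite_in_batches((f"user:{i}:settings" for i in range(250)), lambda key: None)
print(f"rewrote {written} cache entries")
```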

    We know these incidents disrupted the workflows of millions of developers. While we have made substantial, long-term investments in how GitHub is built and operated to improve resilience, GitHub's availability is not yet meeting our expectations. Getting there requires deep architectural work that is already underway, as well as urgent, targeted improvements. We are taking the following immediate steps:

    1. We have already optimized the caching mechanism to avoid write amplification and added self-throttling during bulk updates.
    2. We are adding safeguards to ensure the caching mechanism responds more quickly to rollbacks and strengthening how changes to these caching systems are planned, validated, and rolled out with additional checks.
    3. We are fixing the underlying cause of connection exhaustion in our Git HTTPS proxy layer so the proxy can recover from this failure mode automatically without requiring manual restarts.

    GitHub is critical infrastructure for your work, your teams, and your businesses. We're focusing on these mitigations and long-term infrastructure work so GitHub is available, at scale, when and where you need it.

    Feb 9,20:09 UTC
    Update -Actions, Codespaces, Git Operations, Issues, Packages, Pages, Pull Requests and Webhooks are operating normally.
    Feb 9,20:09 UTC
    Update -We are seeing all services have returned to normal processing.
    Feb 9,20:08 UTC
    Update -A number of services have recovered, but we are continuing to investigate issues with Dependabot, Actions, and a number of other services.

    We will continue to investigate and monitor for full recovery.

    Feb 9,19:54 UTC
    Update -Codespaces is experiencing degraded performance. We are continuing to investigate.
    Feb 9,19:31 UTC
    Update -We have applied mitigations and are seeing signs of recovery.

    We will continue to monitor for full recovery.

    Feb 9,19:29 UTC
    Update -Packages is experiencing degraded performance. We are continuing to investigate.
    Feb 9,19:10 UTC
    Update -Pull Requests is experiencing degraded performance. We are continuing to investigate.
    Feb 9,19:07 UTC
    Update -We are seeing impact to several systems including Actions, Copilot, Issues, and Git.

    Customers may see slow and failed requests, and Actions jobs being delayed.

    We are investigating.

    Feb 9,19:07 UTC
    Update -Webhooks is experiencing degraded performance. We are continuing to investigate.
    Feb 9,19:07 UTC
    Update -Pages is experiencing degraded performance. We are continuing to investigate.
    Feb 9,19:05 UTC
    Update -Actions is experiencing degraded availability. We are continuing to investigate.
    Feb 9,19:02 UTC
    Investigating -We are investigating reports of degraded performance for Actions, Git Operations and Issues
    Feb 9,19:01 UTC
    Notifications are delayed
    Resolved -On February 9th, the notifications service started showing degradation around 13:50 UTC, resulting in increased notification delivery delays. Our team started investigating.

    Around 14:30 UTC the service started to recover as the team continued investigating the incident. Around 15:20 UTC degradation resurfaced, with increasing delays in notification deliveries and a small error rate (below 1%) on UI and API endpoints related to notifications.

    At 16:30 UTC, we mitigated the incident by reducing contention through throttling workloads and performing a database failover. The median delay for notification deliveries was 80 minutes at this point, and queues started emptying. Around 19:30 UTC the backlog of notifications was processed, bringing the service back to normal, and the incident was declared closed.

    The incident was caused by the notifications database degrading under intense load. Most notifications-related asynchronous workloads, including notification deliveries, were stopped to reduce pressure on the database. To ensure system stability, a database failover was executed. Following the failover, we applied a configuration change to improve performance. The service started recovering after these changes.

    We are reviewing the configuration of our databases to understand the performance drop and prevent similar issues from happening in the future. We are also investing in monitoring to detect and mitigate this class of incidents faster.

    Feb 9,19:29 UTC
    Update -We continue to observe recovery of notifications. Notification delivery delays have been resolved.
    Feb 9,19:14 UTC
    Update -We are continuing to recover from notification delivery delays. Notifications are currently being delivered with an average delay of approximately 15 minutes. We are working through the remaining backlog.
    Feb 9,18:33 UTC
    Update -We are continuing to recover from notification delivery delays. Notifications are currently being delivered with an average delay of approximately 30 minutes. We are working through the remaining backlog.
    Feb 9,17:57 UTC
    Update -We are seeing recovery in notification delivery. Notifications are currently being delivered with an average delay of approximately 1 hour as we work through the backlog. We continue to monitor the situation closely.
    Feb 9,17:25 UTC
    Update -We continue to investigate delays in notification delivery with average delivery latency now nearing 1 hour 20 minutes. We are just now starting to see some signs of recovery.
    Feb 9,16:51 UTC
    Update -We are investigating notification delivery delays with the current delay being around 50 minutes. We are working on mitigation.
    Feb 9,16:12 UTC
    Investigating -We are investigating reports of impacted performance for some GitHub services.
    Feb 9,15:54 UTC
    Incident with Pull Requests
    Resolved -On February 9, 2026, GitHub experienced two related periods of degraded availability affecting GitHub.com, the GitHub API, GitHub Actions, Git operations, GitHub Copilot, and other services. The first period occurred between 16:12 UTC and 17:39 UTC, and the second between 18:53 UTC and 20:09 UTC. In total, users experienced approximately 2 hours and 43 minutes of degraded service across the two incidents.

    During both incidents, users encountered errors loading pages on GitHub.com, failures when pushing or pulling code over HTTPS, failures starting or completing GitHub Actions workflow runs, and errors using GitHub Copilot. Additional services including GitHub Issues, pull requests, webhooks, Dependabot, GitHub Pages, and GitHub Codespaces experienced intermittent errors. SSH-based Git operations were not affected during either incident.

    Our investigation determined that both incidents shared the same underlying cause: a configuration change to a user settings caching mechanism caused a large volume of cache rewrites to occur simultaneously. During the first incident, asynchronous rewrites overwhelmed a shared infrastructure component responsible for coordinating background work, triggering cascading failures. Increased load caused the service responsible for proxying Git operations over HTTPS to exhaust available connections, preventing it from accepting new requests. We mitigated this incident by disabling async cache rewrites and restarting the affected Git proxy service across multiple datacenters.

    An additional source of updates to the same cache circumvented our initial mitigations and caused the second incident. This generated a high volume of synchronous writes, causing replication delays that cascaded in a similar pattern and again exhausted the Git proxy’s connection capacity, degrading availability across multiple services. We mitigated by disabling the source of the cache rewrites and again restarting Git proxy.

    We know these incidents disrupted the workflows of millions of developers. While we have made substantial, long-term investments in how GitHub is built and operated to improve resilience, GitHub's availability is not yet meeting our expectations. Getting there requires deep architectural work that is already underway, as well as urgent, targeted improvements. We are taking the following immediate steps:

    1. We have already optimized the caching mechanism to avoid write amplification and added self-throttling during bulk updates.
    2. We are adding safeguards to ensure the caching mechanism responds more quickly to rollbacks and strengthening how changes to these caching systems are planned, validated, and rolled out with additional checks.
    3. We are fixing the underlying cause of connection exhaustion in our Git HTTPS proxy layer so the proxy can recover from this failure mode automatically without requiring manual restarts.

    GitHub is critical infrastructure for your work, your teams, and your businesses. We're focusing on these mitigations and long-term infrastructure work so GitHub is available, at scale, when and where you need it.

    Feb 9,17:40 UTC
    Update -Pull Requests is operating normally.
    Feb 9,17:40 UTC
    Update -Webhooks is operating normally.
    Feb 9,17:39 UTC
    Update -Actions is operating normally.
    Feb 9,17:37 UTC
    Update -We are seeing recovery across all products and are continuing to monitor service health.
    Feb 9,17:32 UTC
    Update -Pages is operating normally.
    Feb 9,17:29 UTC
    Update -Git Operations is operating normally.
    Feb 9,17:26 UTC
    Update -Issues is operating normally.
    Feb 9,17:25 UTC
    Update -Pages is experiencing degraded performance. We are continuing to investigate.
    Feb 9,17:08 UTC
    Update -We have identified the cause of high error rates and taken steps to mitigate. We see early signs of recovery but are continuing to monitor impact.
    Feb 9,16:58 UTC
    Update -Issues is experiencing degraded performance. We are continuing to investigate.
    Feb 9,16:50 UTC
    Update -Webhooks is experiencing degraded performance. We are continuing to investigate.
    Feb 9,16:40 UTC
    Update -Git Operations is experiencing degraded performance. We are continuing to investigate.
    Feb 9,16:40 UTC
    Update -Actions is experiencing degraded performance. We are continuing to investigate.
    Feb 9,16:22 UTC
    Update -We are seeing intermittent errors on many pages and API requests and are investigating.
    Feb 9,16:21 UTC
    Update -Issues is experiencing degraded availability. We are continuing to investigate.
    Feb 9,16:20 UTC
    Investigating -We are investigating reports of degraded performance for Pull Requests
    Feb 9,16:19 UTC
    Incident with Actions
    Resolved -On February 9th, 2026, between 09:16 UTC and 15:12 UTC GitHub Actions customers experienced run start delays. Approximately 0.6% of runs across 1.8% of repos were affected, with an average delay of 19 minutes for those delayed runs.

    The incident occurred when increased load exposed a bottleneck in our event publishing system, causing one compute node to fall behind on processing Actions Jobs. We mitigated by rebalancing traffic and increasing timeouts for event processing. We have since isolated performance critical events to a new, dedicated publisher to prevent contention between events and added safeguards to better tolerate processing timeouts.
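
Isolating performance-critical events on their own publisher, as described above, is essentially a routing decision: critical event types go to a dedicated queue so bulk traffic cannot delay them. A toy sketch with hypothetical event names (not GitHub's event pipeline):

```python
# Route performance-critical events to a dedicated queue so they are not
# delayed behind bulk traffic. Generic sketch of the isolation pattern
# described above, not GitHub's event publishing system.
import queue

# Hypothetical event type names; the point is the routing decision.
CRITICAL_EVENT_TYPES = {"job_assigned", "job_started"}

critical_queue: "queue.Queue[dict]" = queue.Queue()
bulk_queue: "queue.Queue[dict]" = queue.Queue()

def publish(event: dict) -> None:
    """Send performance-critical events to their own queue, everything else to bulk."""
    target = critical_queue if event.get("type") in CRITICAL_EVENT_TYPES else bulk_queue
    target.put(event)

publish({"type": "job_assigned", "run_id": 123})
publish({"type": "telemetry", "run_id": 123})
print(critical_queue.qsize(), "critical /", bulk_queue.qsize(), "bulk events queued")
```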

    Feb 9,15:46 UTC
    Update -Actions is operating normally.
    Feb 9,15:46 UTC
    Update -Actions run delays have returned to normal levels.
    Feb 9,15:46 UTC
    Update -We identified a bottleneck in our processing pipeline and have applied mitigations. We will continue to monitor for full recovery.
    Feb 9,15:26 UTC
    Update -We continue to investigate an issue causing Actions run start delays, impacting approximately 4% of users.
    Feb 9,14:54 UTC
    Update -We are investigating an issue with Actions run start delays, impacting approximately 4% of users.
    Feb 9,14:17 UTC
    Investigating -We are investigating reports of degraded performance for Actions
    Feb 9,14:17 UTC
    Degraded performance for Copilot Coding Agent
    Resolved -On February 9, 2026, between ~06:00 UTC and ~12:12 UTC, Copilot Coding Agent and related Copilot API endpoints experienced degraded availability. The primary impact was to agent-based workflows (requests to /agents/swe/*, including custom agent configuration checks), where 154k users saw failed requests and error responses in their editor/agent experience. Impact was concentrated among users and integrations actively using Copilot Coding Agent with VS Code.

    The degradation was caused by an unexpected surge in traffic to the related API endpoints that exceeded an internal secondary rate limit. That resulted in upstream request denials which were surfaced to users as elevated 500 errors.

    We mitigated the incident by deploying a change that increased the applicable rate limit for this traffic, which allowed requests to complete successfully and returned the service to normal operation.

    After the mitigation, we deployed guardrails with applicable caching to avoid a repeat of similar incidents. We also temporarily increased infrastructure capacity to better handle backlog recovery from the rate limiting. We are improving monitoring around growing agentic API endpoints.
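
The guardrails with caching mentioned above generally mean serving repeated identical lookups (such as custom agent configuration checks) from a short-lived local cache so they no longer count against the upstream rate limit. A rough sketch of that idea; the function names are hypothetical:

```python
# Short-TTL cache in front of a rate-limited lookup, so repeated identical
# requests are served locally instead of counting against the upstream limit.
# Illustrative sketch only; function names are hypothetical.
import time

_CACHE: dict[str, tuple[float, dict]] = {}
TTL_SECONDS = 60.0

def fetch_agent_config(repo: str, fetch_upstream) -> dict:
    """Return the agent configuration for a repo, caching results for TTL_SECONDS."""
    now = time.monotonic()
    cached = _CACHE.get(repo)
    if cached and now - cached[0] < TTL_SECONDS:
        return cached[1]                     # served locally; no upstream call
    config = fetch_upstream(repo)            # the call subject to rate limiting
    _CACHE[repo] = (now, config)
    return config

calls = {"n": 0}
def upstream(repo: str) -> dict:
    calls["n"] += 1
    return {"repo": repo, "custom_agents": []}

fetch_agent_config("octo/widgets", upstream)
fetch_agent_config("octo/widgets", upstream)  # second call is served from the cache
print("upstream calls:", calls["n"])          # -> 1
```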

    Feb 9,12:12 UTC
    Update -We are continuing to investigate the degraded availability for Copilot Coding Agent.
    Feb 9,11:14 UTC
    Update -We are investigating degraded availability for Copilot Coding Agent. We will continue to keep users updated on progress towards mitigation.
    Feb 9,10:04 UTC
    Investigating -We are investigating reports of impacted performance for some GitHub services.
    Feb 9,10:01 UTC
    Degraded Performance in Webhooks API and UI, Pull Requests
    Resolved -On February 9, 2026, between 07:05 UTC and 11:26 UTC, GitHub experienced intermittent degradation across Issues, Pull Requests, Webhooks, Actions, and Git operations. Approximately every 30 minutes, users encountered brief periods of elevated errors and timeouts lasting roughly 15 seconds each. During the incident window, approximately 1–2% of requests were impacted across these services, with Git operations experiencing up to 7% error rates during individual spikes. GitHub Actions saw up to 2% of workflow runs delayed by a median of approximately 7 minutes due to backups created during these periods.

    This was due to multiple resource-intensive workloads running simultaneously, which caused intermittent processing delays on the data storage layer. We mitigated the incident by scaling storage to a larger compute capacity, which resolved the processing delays.

    We are working to improve detection of resource-intensive queries, identify changes in load patterns, and enhance our monitoring to reduce our time to detection and mitigation of issues like this one in the future.

    Feb 9,11:26 UTC
    Update -Actions is operating normally.
    Feb 9,11:26 UTC
    Update -Issues is operating normally.
    Feb 9,11:26 UTC
    Update -Webhooks is operating normally.
    Feb 9,11:26 UTC
    Update -Pull Requests is operating normally.
    Feb 9,11:26 UTC
    Update -We have identified a faulty infrastructure component and have failed over to a healthy instance. We are continuing to monitor the system for recovery.
    Feb 9,11:11 UTC
    Update -Git Operations is operating normally.
    Feb 9,11:04 UTC
    Update -We are continuing to investigate intermittent elevated timeouts across the service.
    Feb 9,10:48 UTC
    Update -Git Operations is experiencing degraded performance. We are continuing to investigate.
    Feb 9,10:33 UTC
    Update -We are continuing to investigate intermittent elevated timeouts across the service.
    Feb 9,10:09 UTC
    Update -We are continuing to investigate intermittent elevated timeouts across the service. Current impact is estimated around 1% or less of requests.
    Feb 9,09:31 UTC
    Update -Actions is experiencing degraded performance. We are continuing to investigate.
    Feb 9,09:23 UTC
    Update -We are continuing to investigate intermittent elevated timeouts.
    Feb 9,08:52 UTC
    Update -We are investigating intermittent latency and errors with Webhooks API, Webhooks UI, and PRs. We will continue to keep users updated on progress towards mitigation.
    Feb 9,08:17 UTC
    Update -Issues is experiencing degraded performance. We are continuing to investigate.
    Feb 9,08:17 UTC
    Investigating -We are investigating reports of degraded performance for Pull Requests and Webhooks
    Feb 9,08:15 UTC
    Feb 8, 2026

    No incidents reported.

    Feb 7, 2026

    No incidents reported.

    Feb 6, 2026
    Incident with Pull Requests
    Resolved -On February 6, 2026, between 17:49 UTC and 18:36 UTC, the GitHub Mobile service was degraded, and some users were unable to create pull request review comments on deleted lines (and in some cases, comments on deleted files). This impacted users on the newer comment-positioning flow available in version 1.244.0 of the mobile apps. Telemetry indicated that the failures increased as the Android rollout progressed. This was due to a defect in the new comment-positioning workflow that could result in the server rejecting comment creation for certain deleted-line positions.

    We mitigated the incident by halting the Android rollout and implementing interim client-side fallback behavior while a platform fix is in progress. The client-side fallback is scheduled to be published early this week. We are working to (1) add clearer client-side error handling (avoid infinite spinners), (2) improve monitoring/alerting for these failures, and (3) adopt stable diff identifiers for diff-based operations to reduce the likelihood of recurrence.

    Feb 6,18:36 UTC
    Update -Some GitHub Mobile app users may be unable to add review comments on deleted lines in pull requests. We're working on a fix and expect to release it early next week.
    Feb 6,18:36 UTC
    Update -Pull Requests is operating normally.
    Feb 6,18:04 UTC
    Update -We're currently investigating an issue affecting the Mobile app that can prevent review comments from being posted on certain pull requests when commenting on deleted lines.
    Feb 6,18:00 UTC
    Investigating -We are investigating reports of degraded performance for Pull Requests
    Feb 6,17:49 UTC
    Incident with Copilot
    Resolved -On February 6, 2026, between 10:28 and 11:54 UTC, Visual Studio Code users experienced a degraded experience on GitHub Copilot when using the Claude Opus 4.6 model. During this time, approximately 50% of users encountered agent turn failures due to the model being unable to serve the volume of incoming requests.

    The issue was caused by rate limits that were set too low for actual demand. While the initial deployment showed no concerns, a surge in traffic from Europe on the following day caused VS Code to begin hitting rate limit errors. Additionally, a degradation message intended to notify users of high usage failed to trigger due to a misconfiguration. We mitigated the incident by adjusting rate limits for the model.

    We improved our rate limiting to prevent future models from experiencing similar issues. We are also improving our capacity planning processes to reduce the risk of similar incidents in the future, and enhancing our detection and mitigation capabilities to reduce impact to customers.

    Feb 6,11:58 UTC
    Update -Copilot is operating normally.
    Feb 6,11:58 UTC
    Update -We have increased capacity and are seeing recovery.
    Feb 6,11:57 UTC
    Update -Opus 4.6 is currently experiencing high demand and we are working on adding capacity.
    Feb 6,11:21 UTC
    Investigating -We are investigating reports of degraded performance for Copilot
    Feb 6,11:16 UTC