Container builds
us-east-1 - Operational
eu-central-1 - Operational
GitHub Actions
Depot-managed Actions Runners - Operational
GitHub.com - Actions - Operational
API - Operational
Website - Operational
GitHub has resolved its incident. Jobs running on Depot never saw above-normal queue times, indicating that the outage affected systems Depot no longer relies on.
We are monitoring an incident with GitHub Actions: https://www.githubstatus.com/incidents/cbdzqm5fw0fm. Currently, we are seeing minimal disruption to queue times for jobs running on Depot, but we will continue to monitor for any impact.
Queue times have recovered and jobs are processing normally again.
We are currently investigating a subset of GitHub Actions jobs that are queueing for longer than normal.
We are seeing queue times decreasing to normal levels. We are monitoring.
We are currently investigating alerts that Ubuntu 22.04 jobs are seeing increased queue times.
All jobs are processing normally.
We are investigating GitHub jobs taking longer to start.
We have brought container builds and GitHub Actions back to normal capacity. We appreciate your patience and will publish a full post-incident review on our blog in the coming days. We will continue to monitor for anything that may come up.
We're seeing queue times beginning to return to normal for GitHub Actions. Some container builds are failing to launch. We're investigating these, but in the meantime, you can try resetting your cache to clear out the bad state.
We have brought the system back to normal capacity, and backlogs are caught up for primary regions. Additional regions are being brought back up to normal capacity at the moment. We will continue to monitor recovery.
We've deployed an additional workaround for an AWS CPU issue that is backing up the system under load. We're working to return the system to a normal state where the workaround can be removed and normal processing can resume.
We continue working to bring the system back to full capacity while processing a large backlog of work. Your jobs may still appear queued while we build out slack in the system.
We have rolled out an initial fix and are seeing recovery for container builds and GitHub Actions.
We have identified the root cause of the problem and are rolling out a fix across Depot now.
The stuck jobs have all processed successfully.
We are currently observing a group of GitHub jobs that have not yet started processing on runners. New jobs seem to be unaffected. We are investigating.
We are investigating longer queue times for Windows GitHub Actions jobs.
We are observing queue times returning to normal. We will continue to monitor.
We are continuing to see recovery and are monitoring the remaining outliers.
We have deployed a fix and are monitoring for queue time recovery.
We've discovered a failure in our task scheduler that is blocking operations. We're deploying a fix that should unblock jobs.
We are currently investigating possible delays in Actions jobs starting.
Queue times have returned to normal.
The build service is restored. The underlying cause was a certificate/system-time issue in the macOS VM image itself. We have deployed a working image to all macOS hosts.
We are currently deploying a fix to all macOS hosts, and the build service is beginning to recover.
We have identified the potential root cause and are working on rolling a fix out across the fleet of compute for macOS runners.
We are currently investigating an issue with macOS runners for GitHub Actions not launching.
GitHub has marked its incident as resolved.
We are currently monitoring the ongoing GitHub outage that is impacting GitHub Actions and other services: https://www.githubstatus.com/incidents/lb0d8kp99f2v
Mar 2025 to May 2025