Container builds
us-east-1 - Operational
eu-central-1 - Operational
GitHub Actions
Depot-managed Actions Runners - Operational
GitHub.com - Actions - Operational
API - Operational
Website - Operational
The queue backlog is clear, and build start times have returned to normal.
We are monitoring the rollout of the fix and are currently waiting for the queue backlog to clear.
We have deployed a fix, and the error has cleared.
We are investigating an issue causing some builds to fail with the message "failed to read dockerfile: failed to load cache key: no handler for git".
We've deployed a fix and will continue to monitor the results.
We implemented a fix and are currently monitoring the result.
We are currently investigating this incident.
We've deployed a fix to address this and are continuing to monitor it.
We are currently investigating an issue that is causing some builds to fail to start.
We have deployed a fix; build service is restored.
We are currently investigating an issue preventing build machines from starting.
A database fix has been deployed; API service is restored.
We have identified a database issue as the cause of the outage and are working to remediate it.
We are investigating an API outage.
This incident has been resolved; build service is restored.
This capacity issue only exists in the us-east region, and we have capacity available in the eu-central region. You can change your region via the project settings.
We have identified the issue and are working with our hosting provider to get additional capacity.
We are currently investigating an issue that is causing builders to fail to launch.
This incident has been resolved.
All builds are now using primary infrastructure, and we are monitoring to ensure that full service is restored.
AWS is reporting that the us-east-1 outage has cleared; we are routing builds back to primary infrastructure.
AWS is experiencing an outage in us-east-1 (https://health.aws.amazon.com/health/status). We are attempting a failover to backup infrastructure in another region.
AWS is still experiencing issues in us-east-1; we are preparing to fail over to backup infrastructure.
AWS us-east-1 is currently rate-limiting infrastructure updates; we are monitoring the situation.
We are investigating lag in acquiring build machines, which is causing some build requests to time out.
Build service is restored.
We're currently investigating an issue causing builds to fail to start.
Database service is restored; we are working with our database provider to understand the cause of the outage.
We are investigating reports of intermittent database connectivity issues.
The fix has been deployed; build service is restored.
We have identified the cause and are deploying a fix.
We are investigating an incident preventing some build machines from launching.
Build service is operational. We are continuing to work with Google Cloud to ensure they have resolved the issue.
Google Cloud started experiencing periodic load balancer failures about 5 hours ago. These failures are causing intermittent issues with build machines acquiring build information. We are monitoring the situation and are working with GCP support to resolve the issue.
Build service is restored; we will continue to monitor for any issues.
The fix has been deployed; we are manually restarting any affected build machines to apply it.
We have identified the issue and are deploying a fix.
We are investigating reports of issues acquiring build machines.
Incident history: Jul 2023 to Sep 2023