We've deployed a fix to address this and are continuing to monitor it.
We are currently investigating an issue that is causing some builds to fail to start.
We have deployed a fix; build service is restored.
We are currently investigating an issue preventing build machines from starting.
A database fix has been deployed; API service is restored.
We have identified a database issue as the cause of the outage and are working to remediate it.
We are investigating an API outage.
This incident has been resolved; build service is restored.
This capacity issue exists only in the us-east region; we have capacity available in the eu-central region. You can change your region via the project settings.
We have identified the issue and are working with our hosting provider to get additional capacity.
We are currently investigating an issue that is causing builders to fail to launch.
This incident has been resolved.
All builds are now using primary infrastructure; we are monitoring to ensure that all service is restored.
AWS is reporting that the us-east-1 outage has cleared; we are routing builds back to primary infrastructure.
AWS is experiencing an outage in us-east-1 (https://health.aws.amazon.com/health/status). We are attempting a failover to backup infrastructure in another region.
AWS is still experiencing issues in us-east-1; we are preparing to fail over to backup infrastructure.
AWS us-east-1 is currently rate-limiting infrastructure updates; we are monitoring.
We are investigating lag in acquiring build machines, which is causing some build requests to time out.
Build service is restored.
We're currently investigating an issue causing builds to fail to start.
Database service is restored; we are working with our database provider to understand the cause of the outage.
We are investigating reports of intermittent database connectivity issues.
The fix has been deployed; build service is restored.
We have identified the cause and are deploying a fix.
We are investigating an incident preventing some build machines from launching.
Build service is operational. We are continuing to work with Google Cloud to ensure they have resolved the issue.
Google Cloud started experiencing periodic load balancer failures about 5 hours ago. These failures are causing intermittent issues with build machines acquiring build information. We are monitoring the situation and are working with GCP support to resolve the issue.
Build service is restored; we will continue to monitor for any issues.
The fix has been deployed; we are manually restarting any affected build machines to apply it.
We have identified the issue and are deploying a fix.
We are investigating reports of issues acquiring build machines.
The fix has been deployed; all builds are currently succeeding, and we are continuing to monitor.
We have identified the cause and have deployed a temporary fix.
We are investigating reports of some builds failing to acquire builders.
Database service is restored; builds are functional again.
Database service is beginning to recover; we are monitoring its progress.
Our database provider is still offline; we are continuing to monitor.
We are currently investigating an outage with our primary database provider; all services are affected.
Jul 2023 to Sep 2023