
The Iberian Blackout - When the Internet Goes Dark

Infrastructure · Disaster Recovery · Resilience · Security · DevOps

On Monday 28 April 2025, at 12:33 local time, the lights went out across the Iberian Peninsula. Within five seconds, Spain lost approximately 15 gigawatts of generating capacity - roughly 60% of its national electricity demand. Portugal followed almost instantly. The result was one of the most significant power system failures in European history, affecting over 60 million people across both countries and parts of southern France.

For those of us who build and maintain digital infrastructure for a living, this was not just a news story. It was a stark, real-world case study in what happens when the systems we take for granted simply stop working.

What Actually Happened

The blackout was triggered by a series of small grid failures concentrated in southern Spain. These cascaded rapidly through the network, causing a chain reaction that overwhelmed the system's ability to compensate. Two major voltage fluctuations occurred in quick succession. The second was severe enough to disconnect Spain's power system from the wider European grid entirely, collapsing the Iberian electricity network within moments.

At the time of the incident, over half of Spain's power supply was coming from solar generation, and electricity prices were slightly negative. Several large conventional generators were offline for seasonal maintenance. The Iberian grid's relatively weak interconnection with France - just 2.8 GW of import capacity - meant there was limited external support available when things went wrong. The system lacked the rotational inertia that traditional synchronous generators provide, which is critical for absorbing sudden shocks and maintaining frequency stability.
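
To make the inertia point concrete: in a simplified swing-equation view (a textbook approximation, not a figure from the official report), the initial rate of change of frequency after losing generation is inversely proportional to how much synchronous inertia is online.

```latex
% Simplified swing-equation approximation of the initial rate of change of
% frequency (RoCoF) after a sudden generation loss. Illustrative only.
%   f_0      nominal frequency (50 Hz in Europe)
%   \Delta P size of the generation loss
%   H        aggregate inertia constant of the synchronous machines online (s)
%   S        their combined rated apparent power
\left.\frac{df}{dt}\right|_{t=0^+} \approx -\frac{\Delta P \, f_0}{2\,H\,S}
```

With much of the conventional fleet offline, the denominator shrinks: the same disturbance moves frequency faster, leaving less time for protections and operators to respond.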

A cyberattack was ruled out by both the Spanish government and Red Eléctrica de España, the national grid operator. The official report, published in June 2025, attributed the failure to a combination of technical, structural, and planning errors. Insufficient synchronous generation, voltage instability in the days leading up to the event, and poor contingency planning by grid operators all played a part.

The Digital Fallout

What made this event particularly relevant to anyone working in technology was the scale of the digital disruption. This was not just a case of the lights going off. The entire digital infrastructure of two modern European nations went dark simultaneously.

Internet traffic across Spain and Portugal dropped by approximately 75-90% within minutes. Mobile networks failed as base station battery backups were rapidly depleted. Landline communications went down. ATMs and electronic payment systems stopped working entirely - if you did not have cash on you, you could not buy anything. The Madrid metro had to be evacuated. Hospitals switched to emergency generators to keep critical systems running.

Even satellite internet was not immune. Starlink's service in Spain went offline for over 16 hours because its local point of presence in Madrid lost power, and while traffic was eventually rerouted through London and Milan using inter-satellite laser links, the disruption was significant. Power loss at submarine cable landing stations in Portugal meant that countries as far away as Angola experienced connectivity issues.

Perhaps most telling was the behaviour observed in the cybersecurity space. Monitoring by NETSCOUT showed that while legitimate internet traffic collapsed, malicious infrastructure rebounded almost immediately once services began to be restored. DDoS attack targeting remained constant throughout the outage. The threats did not take a day off just because the grid did.

Why This Matters for Every Digital Business

It is easy to think of a power grid failure as someone else's problem - something for energy companies and governments to worry about. But for anyone running a web application, an API, an e-commerce platform, or any cloud-hosted service, the Iberian blackout exposed uncomfortable truths about our collective dependency on assumptions we rarely question.

The most fundamental assumption is that the internet will simply be there. We build systems on the premise that DNS will resolve, that CDNs will serve content, that payment gateways will process transactions, and that our servers will have electricity. When all of those things disappear simultaneously, the question is not whether your code is well-written. It is whether your architecture can degrade gracefully when entire regions go offline.
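
"Degrade gracefully" can be as small as wrapping non-critical calls so they fail soft. A minimal Laravel sketch, assuming a hypothetical internal recommendations service:

```php
<?php
// A minimal sketch of failing soft. The recommendations service and its URL
// are hypothetical; the point is that a regional outage of a non-critical
// dependency should cost you a page section, not the whole page.

use Illuminate\Support\Facades\Http;

class HomeController extends Controller
{
    public function index()
    {
        $recommendations = rescue(
            fn () => Http::timeout(2)
                ->get('https://recs.internal.example/for-user/42')
                ->throw()
                ->json(),
            rescue: [],   // fall back to an empty list when the call fails
            report: true, // still log the exception so the failure is visible
        );

        // The page renders with or without the recommendations section.
        return view('home', ['recommendations' => $recommendations]);
    }
}
```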

Lessons for Infrastructure Planning

There are practical lessons here that apply whether you are running a Laravel API serving thousands of requests per minute or a small CMS-driven website for a local business.

Geographic redundancy is not optional. If your entire infrastructure lives in a single region, a regional event can take you completely offline. This does not mean you need to run active-active across three continents. But it does mean thinking seriously about where your primary and failover systems live, and whether they share common failure points. If your primary server and your backup are both in the same data centre - or even the same country - you have a single point of failure that you might not have considered.
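
A small building block here is a health-check endpoint that a DNS-level failover service (Route 53, Cloudflare, or similar) can poll to decide when to shift traffic to a standby region. A minimal Laravel sketch, assuming your application's critical dependencies are a database and a cache:

```php
<?php
// routes/web.php — a health endpoint for an external failover service to poll.
// If this region cannot reach its database or cache, report 503 so DNS-level
// failover can route traffic to the standby region instead.

use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Route;

Route::get('/healthz', function () {
    try {
        DB::connection()->getPdo();  // database reachable?
        Cache::get('healthz-probe'); // cache backend reachable?

        return response()->json(['status' => 'ok']);
    } catch (\Throwable $e) {
        return response()->json(['status' => 'degraded'], 503);
    }
});
```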

Test your disaster recovery plan, not just your backups. Many businesses have backups. Far fewer have actually tested restoring from them under pressure. Even fewer have tested what happens when their primary DNS provider, their CDN, their payment processor, and their monitoring tools all go down at the same time. The Iberian blackout showed that cascading failures do not respect the boundaries we draw around our systems.
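
A concrete starting point is a scheduled drill that restores last night's dump into a scratch database and checks the result - it turns "we have backups" into "we have restores". A sketch, assuming a Postgres dump at a known path and a 'scratch' connection defined in config/database.php:

```php
<?php
// app/Console/Commands/RestoreDrill.php — a hypothetical restore drill.
// Run it on a schedule so a broken backup pipeline is found in a drill,
// not during a real incident.

namespace App\Console\Commands;

use Illuminate\Console\Command;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Process;

class RestoreDrill extends Command
{
    protected $signature = 'dr:restore-drill';

    protected $description = 'Restore the latest backup into a scratch database and smoke-test it';

    public function handle(): int
    {
        // Assumes a nightly dump at this path and a dr_scratch database.
        $result = Process::run('pg_restore --clean --dbname=dr_scratch /backups/latest.dump');

        if ($result->failed()) {
            $this->error('Restore failed: '.$result->errorOutput());

            return self::FAILURE;
        }

        // Smoke test: the restored data should actually contain rows.
        $count = DB::connection('scratch')->table('orders')->count();
        $this->info("Restore OK — {$count} orders present in scratch database.");

        return $count > 0 ? self::SUCCESS : self::FAILURE;
    }
}
```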

Think about your dependencies. Every third-party service your application relies on is a potential point of failure. Payment gateways, email delivery services, SMS providers, analytics platforms, authentication services - each of these has its own infrastructure, and that infrastructure lives somewhere physical. Do you know where? Do you have fallback options if they disappear?
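
Email is an easy place to start, because Laravel ships a failover mail transport out of the box: list your providers in order of preference, and delivery falls through to the next when one is unreachable. The provider names below are examples; use whichever services you actually run.

```php
<?php
// config/mail.php (excerpt) — Laravel's built-in failover mail transport.
// If the first mailer's provider is down, the next one in the list is tried.

return [
    'default' => 'failover',

    'mailers' => [
        'failover' => [
            'transport' => 'failover',
            'mailers' => ['postmark', 'ses', 'smtp'],
        ],

        // ... the 'postmark', 'ses' and 'smtp' mailers defined as usual ...
    ],
];
```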

Consider offline-capable architectures where appropriate. Progressive web apps, local caching strategies, and offline-first design patterns are not just nice-to-have features for spotty mobile connections. They are resilience strategies. If your application can continue to function in some capacity without a network connection, your users are better served when the unexpected happens.
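
On the server side, the same idea looks like "serve stale on failure": keep a long-lived copy of the last good response so the feature keeps working when the upstream is unreachable. A sketch, with a hypothetical exchange-rate API standing in for any third-party data source:

```php
<?php
// A "serve stale on failure" sketch. The rates API is hypothetical; the idea
// is that a long-lived last-known-good copy keeps the feature working when
// the upstream disappears.

use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\Http;

function currentRates(): array
{
    try {
        $fresh = Http::timeout(3)
            ->get('https://rates.example.com/latest')
            ->throw()
            ->json();

        // Refresh the stale copy on every successful fetch.
        Cache::put('rates.stale', $fresh, now()->addDays(7));

        return $fresh;
    } catch (\Throwable $e) {
        // Upstream down: fall back to the last known-good data, if any.
        return Cache::get('rates.stale', []);
    }
}
```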

Have a communication plan that does not rely on the systems that just failed. During the Iberian blackout, businesses could not reach their customers, their teams, or their hosting providers because the same infrastructure that was down was the infrastructure they relied on for communication. A simple, documented plan that covers how you communicate when your primary channels are unavailable is worth having.

The Broader Resilience Question

The Iberian blackout also raises a broader question about how we think about resilience in an increasingly connected world. As more of our critical infrastructure moves to the cloud, as more devices become dependent on constant connectivity, and as more business processes become entirely digital, the blast radius of infrastructure failures grows larger.

This is not an argument against cloud computing or digital transformation. It is an argument for doing these things thoughtfully, with an honest assessment of what can go wrong and a realistic plan for when it does. The companies that recovered fastest from the Iberian blackout were not the ones with the most sophisticated technology. They were the ones that had planned for failure and practised their response.

One of the less discussed impacts was on development teams themselves. Companies with engineers based in Spain and Portugal lost an entire working day - and in many cases longer, as the disruption to daily life extended well beyond the power being restored. If your entire development team is in one location, even a temporary regional disruption can halt all progress. Distributed teams, while harder to coordinate, offer natural resilience against localised events.

What We Are Doing About It

At The API Guys, this event prompted us to review our own infrastructure practices and those we recommend to our clients. We have always advocated for multi-region deployment strategies and automated failover for the Laravel APIs and web applications we build. But the Iberian blackout reinforced that these are not premium extras - they are baseline requirements for any system that matters.

We are also paying closer attention to the physical infrastructure layer that sits beneath our cloud abstractions. Understanding where your cloud provider's data centres are, how they are powered, what their backup arrangements look like, and how they connect to the wider internet is not over-engineering. It is due diligence.

If you would like to discuss your application's resilience posture, or if the Iberian blackout has made you think twice about your disaster recovery planning, get in touch with us. It is always better to have these conversations before the lights go out.

Ready to Start Your Project?

Get in touch with our Leeds-based team to discuss your Laravel or API development needs.