“The greatest obstacle to discovery is not ignorance. It is the illusion of knowledge.”
I’ve seen the great quote above attributed to a number of famous scientists, from Stephen Hawking to Albert Einstein. It does sound like a science quote. It was, in fact, stated by Daniel Boorstin, who served as the U.S. Librarian of Congress from 1975 to 1987. I would modify that statement to make it relevant to the 21st-century IT world: The greatest obstacle to business resiliency is not the lack of a high availability or disaster recovery solution, but the untested illusion of having either.
Recently, British Airways had a catastrophic systems outage that shut down all travel at Heathrow and Gatwick airports and subsequently affected the travel of 75,000 people worldwide. British Airways released a statement saying, “There was a total loss of power at the data centre. The power then returned in an uncontrolled way, causing physical damage to the IT servers. It was not an IT issue; it was a power issue.”
British Airways CEO Alex Cruz stated that the power supply failure affected systems for check-in, booking, baggage handling, customer contact centers, and more. Once the power surge occurred, failing over to the backup system did not work.
British Airways operates data centers at two sites. Assuming one site is a primary and the other a secondary is just that: an assumption. It’s entirely possible that each data center provides completely different services. Nobody has asked so far whether they had a power surge in both locations. The answer would very likely be no, but it would be a great question to ask just to hear the answer.
If they did have a primary and a secondary site, a number of questions come to mind. Why didn’t the failover to backup systems at the secondary site work? Do they test that type of failover? How often? When was the last time they did it? Have they had success until now? If so, what changed?
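Those are not questions you want to be asking for the first time in the middle of an outage; they’re the kind of thing that can be checked on a schedule and written down. As a purely illustrative sketch, and nothing more than that (the site names and health-check URLs below are hypothetical placeholders, not anything British Airways actually runs), even a small script run from a job scheduler can record whether both sites answer their health checks, so the question “When was the last time we tested it?” always has an answer:

    # Minimal sketch of a scheduled two-site health check (Python).
    # The URLs are hypothetical placeholders; substitute your own endpoints.
    import datetime
    import urllib.request

    SITES = {
        "primary":   "https://dc1.example.com/health",
        "secondary": "https://dc2.example.com/health",
    }

    def check(url, timeout=5.0):
        """Return True if the site answers its health check within the timeout."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False

    if __name__ == "__main__":
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        for name, url in SITES.items():
            # Record the result somewhere auditable so the test history is provable.
            print(stamp, name, "OK" if check(url) else "FAILED")

Of course, answering a health check is not the same as carrying the production workload; a real disaster recovery test means actually cutting over to the secondary site and running on it, on a schedule the business agrees to.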
If the site affected by the power surge contained the backup computer systems, then that leads to more questions. Why are the primary and backup systems in the same physical location? Are they in the same rack? Are they virtualized servers on the same hardware?
Personally, and this is entirely speculation, I would imagine the most likely scenario is that the primary and backup systems affected by the outage are at least in the same building…or were at the time. Remember Occam’s Razor: “Pluralitas non est ponenda sine necessitate,” or “Entities should not be multiplied unnecessarily.” In practice, it means that the simplest or most obvious explanation is usually the correct one. Don’t make mysteries where none exist. Most likely, British Airways had all affected systems (both primary and secondary) in one data center that had some kind of power issue. Or perhaps they had the backup systems in another data center, but when they went to cut over, the failover failed because the backup site was inadequate, or not properly tested, or both. Either way, whatever the plan was, and that’s assuming they had a plan, it did not achieve the desired result of a redundant system.
Cruz went on to display no small amount of hubris in stating, “This will not happen again at British Airways.” I’d like to bring that statement down to a far different level of technical sophistication, more commonly known as “keeping it real.” If I were the CEO or CIO, I would want to know exactly what happened and what the plan would be to ensure that this event was not reproducible. Has anyone in the media asked, “What could have been done to prevent this?” That right there is the question of the day. This is what separates good IT from not-so-good IT, or a good business from a not-so-good business. If a restaurant sent out a plate with a Band-Aid in the middle of it, the question of how to prevent that in the future would spawn solutions such as wearing latex gloves to prep food and having the server eyeball every plate that goes out. The primary solution is to prevent a bandage-adorned chicken parmesan from getting into the hands of the customer, and the secondary solution is to provide a backup in case the primary fails. That translates well to just about any business or IT problem.
In my years as a technician and in IT management, I’ve seen a number of events that would make your hair curl. I’m sure you have, too.
I once worked for a company with a propane generator to power the computer systems. In case of a power outage, the UPS would carry us over until the generator kicked in. They tested the generator every week for about 18 months. During those tests, the generator ran for approximately 30 seconds and then shut off. It never ran long enough or got hot enough to blow the motor. The first time they lost power, the generator lasted maybe 10 minutes and then ground to a stiff halt, because nobody had been assigned the job of adding motor oil, let alone checking the level. Since nobody was assigned the job, the only person accountable was the outsourced project manager, who was long gone. That generator failure caused three days of systems outage because most of the servers went down hard, and it was disk drive replacements and backup tapes for the loyal IT staff.
Then there was the time we moved server rooms, and the young buck we’d just hired was tasked with plugging all the server gear into the brand-new UPS with dual power supplies. He learned a lesson that day after he plugged everything into one side of the UPS and we powered all the servers up. Can you say UPS breaker overload? The systems all went down hard, with a number of disk drive failures. IBM had to make a late-night DASD delivery, and yours truly had to rebuild the system from tapes once again. You want to talk about disaster recovery testing? At that shop, we had so many disasters that we didn’t even need to test, although we did it regularly.
Another company I know had an air conditioning system on the roof, rated to perform down to -10 degrees Celsius. I live in Canada. We have snow. It gets really cold. I’ve been out in -10 wearing shorts to run to the mailbox. The air conditioning would usually work until it got to -20, and then it would just seize up. Or the snow would pile up against the air conditioner intake and suffocate it. They lived with this problem, air conditioning outage after outage, leaving the techs to coordinate industrial fans to blow the hot air out of the room until the weather warmed up. Machines can function at 29 degrees Celsius, but they’re not happy when they do.
I recall a compatriot at a sister company who had an 8’x8’ server room with a rubber dust protector on the bottom of the door, effectively sealing the room to be airtight…and perhaps watertight. They had a rack of servers with a tarp above it, protecting the rack from the sprinkler system 18” above the tarp. My thought at the time was, “Forget the tarp. How long would it take the room to fill with enough water to reach the rack’s power supplies?” You could just see that disaster coming.
To be fair, no business can plan for everything. Some events defy the anticipation of even the best professional planners. Take the events of September 11, 2001: there were circumstances that even the best teams could not have planned enough to overcome. Data centers in Lower Manhattan could operate for a few days on generator power in the aftermath of the World Trade Center attacks; however, no one predicted that fuel delivery into the city would become a major security concern until it did, which caused fuel shortages. Data centers also had issues with overheating generators due to the extremely poor air quality caused by the destruction of the Twin Towers. The shock and chaos following the events of that day further compounded the clinical-sounding logistical concerns we write about in hindsight, but when we talk about it, I guarantee you the emotion of that day still consumes the voice of anyone with a heart.
Good planners learn from every disaster and apply that knowledge and experience moving forward. Data centers all over the world, whether they belong to a major cloud hosting provider or sit in the small server room of a hospital, not to mention utility services like power and water or even trucking and logistics operators, have learned those lessons and applied that knowledge to plan for disasters not unlike 9/11 or natural disasters like the 2011 tsunami that struck Fukushima, Japan.
In the last few weeks, we’ve seen tremendous issues caused by malicious code. The WannaCry incident was the largest ransomware outbreak in history, with about 300,000 machines infected. When the rubber meets the road, it’s not a virus issue. It’s a disaster recovery issue. You can get further by planning for disaster recovery than you can by fighting malware, unpatched systems, and poor user training. The need for disaster recovery starts with a breakdown in any or all of those three areas. Antivirus software is, for the most part, a reactive solution. Patching is a reactive solution to a software or operating system vulnerability. User training is proactive; however, it is the hardest struggle to win. Even with a high level of computer and security literacy in your organization, all it takes is one mistake to set off a chain of events, especially if that user has higher-than-desired authority on the network or systems. Humans will make mistakes, so we need to ensure we are properly protecting our businesses with antivirus software and regular patching to compensate as much as we can for that.
A properly tested disaster recovery solution must be in place to ensure that you can get your business up and running. The questions of “if” and “how” we can get it back shouldn’t have to be asked in the middle of an outage; you need to already know the “if” and the “how.” Furthermore, you need a secondary “how” in case the primary fails. The question of “when” depends on your organization’s tolerance for downtime.
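To put “tolerance for downtime” in concrete numbers, the arithmetic is simple. Here’s a small sketch (the availability targets below are illustrative examples, not recommendations) that converts an availability objective into the downtime it actually permits per year:

    # Minimal sketch: translate an availability target into allowable downtime.
    # The targets below are illustrative, not recommendations.
    HOURS_PER_YEAR = 365 * 24  # 8,760 hours, ignoring leap years

    def allowed_downtime_hours(availability_pct):
        """Hours of downtime per year permitted by a given availability target."""
        return HOURS_PER_YEAR * (1 - availability_pct / 100)

    if __name__ == "__main__":
        for target in (99.0, 99.9, 99.99):
            print(f"{target}% availability allows about "
                  f"{allowed_downtime_hours(target):.1f} hours of downtime per year")

That works out to roughly 88 hours a year at 99 percent, under 9 hours at 99.9 percent, and under an hour at 99.99 percent. Once the business picks its number, the disaster recovery plan, and its testing, has to be designed and proven against it.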
Again: “There was a total loss of power at the data centre. The power then returned in an uncontrolled way, causing physical damage to the IT servers. It was not an IT issue; it was a power issue.”
Replace “power” in “power issue” with “cooling,” “water,” or “fire.” These are common concerns in any data center, and the weakness of that argument becomes obvious. This is an IT executive management issue. In any large and publicly visible organization, IT must be aligned with the business closely enough to foresee and overcome the loss of a data center due to common operational concerns such as electricity, air conditioning, flood, or fire.