Welcome to the Age of Hardware

14 August, 2012 | Cloud, Device, DotNetNuke, Hardware, Hosting, Technology

– Jeffrey J. Hardy

A few thoughts on the state of the technology
Back when I started writing about technology the drives were still floppy, windows were for light and ventilation, and Steve Jobs was still fired from Apple.  Back then, computer drives failed all the time.  I remember one time a staff member shut down a desktop PC and forgot to “park” the hard drive.  Parking a hard drive entailed following a series of commands to make sure the hard drive didn’t get damaged when you moved the PC around (C: hdpark).  Seriously.  The staff member bumped the cart with the PC against the wall.  The computer never fired up again and the data—all 2.8 megabytes—was lost.

Back in the day, right?

Anyway … back in those days of big, clunky computers and servers with tiny amounts of storage, backups were not only important, they were necessary.  There was an odds-on probability that you were going to need a backup at some point to restore your data.  So software programs were invented to copy the data—usually to tape.  I would not be surprised if some of the younger tech professionals around do not know what tape, in this context, is.  If your concept of recording music starts with the CD or MP3s, this world will seem completely alien to you.  First we used stone slabs, then clay tablets, then papyrus, and then finally magnetic tape as we wrote articles and blogs about pterodactyls and wooly mammoths.    

For backups, we mostly used Backup Exec.  Now the odd thing about Backup Exec is that, despite the intent clearly stated within its name (“Back up” being integral), one was never truly certain if a backup was completely achieved.  It was the data equivalent of a Victorian Christmas morning.  If a hard drive failed, you had to rush to the Christmas tree to see if the digital Saint Nicholas had left you a goose as big as yourself or a lump of coal.  Either you faced the prospect of spending all day restoring the precious megabytes of a marketing presentation that no one would listen to, while your boss screamed about the lack of productivity, or you discovered the metaphorical coal lump of a failed backup.

But it was all that we had, so we liked it (as we wrote code while walking uphill in the snow, both ways).  Backup Exec and its peers failed often, but they were marginally more reliable than the hardware they supported.  Life was a bit better with them than without them.

But this is just not true anymore
We are now in the age of tiny computers with massive amounts of storage.  Heck, I have a 1-terabyte drive in my home office that is smaller than a paperback book, purchased new for $119.00.  But here is the real thing … all of this storage is really reliable.  If a drive fails these days—and they still do on occasion—the vast majority of the time it is because the drive is really old, the cooling system failed, bad code or errant configuration was introduced, the computer/server was being horribly mismanaged, or a cheap/knock-off drive was used in the first place.  It may sound like I am splitting hairs here, but I am not.

This is a major change in the market that we all accept, but are rarely consciously aware of.  The simple fact is that the drives themselves are no longer the weak link.  In fact, I am willing to state that, generally speaking, we have so improved the drives that they are the strongest link in the data protection chain.  This is especially true of premium, production-quality servers and workstations.

What prompted me to write this?
PowerDNN has been investing heavily in SAN-based cloud arrays over the last few years.  This stack looks very impressive in the cabinet.  It starts with a 96- or 48-terabyte SAN storage array at the bottom in a hyper-redundant RAID configuration with self-protecting Snapshot backups (also RAID protected).  Depending on the services offered in the stack and the number of customers in this private cloud, there will be 2 to 3 dozen physical servers above it with 100+ processors, 4 to 8 switches (internally and externally redundant), and 1 or 2 firewalls.  It is an impressive sight indeed.

My point is that, barring a human-introduced error that ignores the laws of physics, this setup is a tank that prevents any data loss even in the mathematically improbable case of multiple concurrent failures of servers, storage drives, and/or switches.  In fact, even if several components did fail at once, the likely result is that no service would be lost and no customer would even notice.
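A rough sense of why concurrent failures are “mathematically improbable” can be sketched with a quick calculation.  The sketch below is mine, not PowerDNN’s actual reliability model: it assumes component failures are independent (real failures can be correlated) and uses an illustrative 2% annual single-drive failure rate, not a measured figure from the stack described above.

```python
# Back-of-the-envelope sketch: probability that ALL redundant copies of a
# component fail at once, assuming independent failures.  The 2% annual
# failure rate below is an illustrative assumption, not a measured figure.

def concurrent_failure_probability(p_single, copies):
    """Probability that every one of `copies` independent components fails,
    each with individual failure probability `p_single`."""
    return p_single ** copies

p_drive = 0.02  # assumed annual failure rate of one drive

for copies in (1, 2, 3):
    p = concurrent_failure_probability(p_drive, copies)
    print(f"{copies} redundant copies: {p:.8f}")
```

With two mirrored copies, an assumed 2% single-drive failure rate drops to 0.04% for losing both at once, and a third copy drops it to 0.0008%—which is why layered redundancy (RAID plus snapshots plus redundant switches) makes simultaneous total loss so unlikely.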

Impressive is as impressive does
This impressive array of technologies and built-in redundancy is hardware-based.  Now, I have heard of and read incident reports about problems with such structures.  But the problems, in my experience, have been caused by multiple human errors—or outright greed in the case of hopelessly overloaded SANs.

Free tip for our would-be competitors: No matter what people tell you at hosting conferences and vendor seminars, the value of these things is supposed to be in the performance and efficiency (customer benefit) not the density (your benefit).  Just saying.

To the point at hand: with our setup, products like Symantec Backup and (heaven help us) Backup Exec are just not as relevant as they once were.  Now, the argument can successfully be made for additional backup protocols to achieve geographic diversity.  Certain international organizations, such as medical companies and governmental agencies, are required to have a geographically diverse contingency plan.  Fair enough.  We have two datacenters in Omaha, one in Manchester (UK), and one in Australia to fill that need.  And geographic diversity provides a measure of protection against regional catastrophic events (hurricane, earthquake).  I also think that taking a conventional backup of a website makes a lot of sense from a hacker/DDoS perspective—especially in some other over-crowded hosting environments.

We must recognize that for the vast majority of us, however, the current state of hardware technology is the strongest link in the data protection chain.  It reduces third-party software backup solutions to little more than added cost, complexity, and unneeded points of failure for the great majority of use cases.
