Backing up the Cloud

March 31st, 2008 - Category: Cloud Computing

Cloud-based computing is what all the kids are talking about these days. And with good reason. Buying and maintaining your own physical servers is expensive and labour-intensive. To do it efficiently, you need an economy of scale. Providers like Amazon already have one, and they’re renting it out by the hour.

With the latest feature additions to Amazon EC2, it’s even easier to deploy a seriously fault-tolerant web service. You can now programmatically assign “availability zones” to your Amazon servers, to ensure that your service survives in case one availability zone goes down (e.g. because of a fire in a data centre). They’ve also added a long-requested feature, by the way: static IP addresses.
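
For illustration, a sketch of what that can look like with the AWS SDK for Java (v1, which appeared after this post was written); the AMI ID, instance type and zone name below are placeholders, not anything we actually run:

    import com.amazonaws.services.ec2.AmazonEC2;
    import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
    import com.amazonaws.services.ec2.model.AllocateAddressRequest;
    import com.amazonaws.services.ec2.model.AssociateAddressRequest;
    import com.amazonaws.services.ec2.model.Placement;
    import com.amazonaws.services.ec2.model.RunInstancesRequest;

    public class ZonedInstance {
        public static void main(String[] args) {
            AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

            // Launch an instance pinned to a specific availability zone.
            RunInstancesRequest run = new RunInstancesRequest()
                    .withImageId("ami-12345678")          // placeholder AMI
                    .withInstanceType("m1.small")         // placeholder type
                    .withMinCount(1).withMaxCount(1)
                    .withPlacement(new Placement("us-east-1a"));
            String instanceId = ec2.runInstances(run)
                    .getReservation().getInstances().get(0).getInstanceId();

            // Allocate a static (Elastic) IP address and attach it to the instance.
            String publicIp = ec2.allocateAddress(new AllocateAddressRequest()).getPublicIp();
            ec2.associateAddress(new AssociateAddressRequest()
                    .withInstanceId(instanceId)
                    .withPublicIp(publicIp));
        }
    }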

Given the scale and redundancy of Amazon’s infrastructure, your data is stored in one of the most secure locations in the world. There is always that nagging doubt about putting all your eggs in one basket, of course. And, more importantly, there is no way to request backup tapes from Amazon. The whole point is, after all, that they’re no longer needed. But hang on, that’s not the only reason we have backup tapes. I would hazard a guess that the majority of requests for backup tapes are not made because of drive failures. They’re made because someone (you know who you are, sales department) has deleted a file they shouldn’t have.

How is this relevant in the Cloud context? Well, you can think of Amazon Web Services customers as users. And users make mistakes. They delete their own data. Or become virus-infested. Or their software misbehaves. So while the Cloud may be performing stellarly (forgive the mixed metaphor), that doesn’t mean backups aren’t needed.

Fortunately, backing up your Amazon data to another location is easy. At Kalibera, we use two services for storing persistent data: S3 and SimpleDB. They need to be backed up in slightly different ways.

There are already a number of backup tools that work with S3. The catch is that we want to do the reverse of what these tools usually do. They push your data to S3; we want to pull it out. (Some of them might be capable of syncing.) At any rate, it’s quick to write a small script that checks the Last-Modified date of your S3 objects and downloads the ones changed since your last backup. In my script, I append the modified time to the file name so that new versions do not overwrite old versions.
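
As a sketch of that idea (not our production script), here is how it might look with the AWS SDK for Java; the bucket name, backup directory and last-backup time are placeholders:

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.GetObjectRequest;
    import com.amazonaws.services.s3.model.ObjectListing;
    import com.amazonaws.services.s3.model.S3ObjectSummary;

    import java.io.File;
    import java.util.Date;

    public class S3PullBackup {
        public static void main(String[] args) {
            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
            String bucket = "my-bucket";                  // placeholder bucket name
            File backupDir = new File("/backup/s3");      // placeholder target directory
            // Time of the previous run; e.g. the previous nightly backup.
            Date lastBackup = new Date(System.currentTimeMillis() - 24L * 3600 * 1000);

            // listObjects returns at most 1,000 keys per call, so page through the bucket.
            ObjectListing listing = s3.listObjects(bucket);
            while (true) {
                for (S3ObjectSummary obj : listing.getObjectSummaries()) {
                    // Only pull objects modified since the previous backup.
                    if (obj.getLastModified().after(lastBackup)) {
                        // Append the modified time so new versions never overwrite old ones.
                        String fileName = obj.getKey().replace('/', '_')
                                + "." + obj.getLastModified().getTime();
                        s3.getObject(new GetObjectRequest(bucket, obj.getKey()),
                                new File(backupDir, fileName));
                    }
                }
                if (!listing.isTruncated()) break;
                listing = s3.listNextBatchOfObjects(listing);
            }
        }
    }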

SimpleDB works almost like a relational database, so the natural place to back it up is a relational database such as MySQL. To facilitate a simple backup process, we add a little meta-data when we update SimpleDB:
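
Something along these lines (the domain, item and attribute names are illustrative, not necessarily the ones we use): every put also stamps the item with a last-modified attribute, zero-padded so SimpleDB’s lexicographic string comparison sorts it correctly.

    import com.amazonaws.services.simpledb.AmazonSimpleDB;
    import com.amazonaws.services.simpledb.AmazonSimpleDBClientBuilder;
    import com.amazonaws.services.simpledb.model.PutAttributesRequest;
    import com.amazonaws.services.simpledb.model.ReplaceableAttribute;

    import java.util.Arrays;

    public class SimpleDBWriter {
        private final AmazonSimpleDB sdb = AmazonSimpleDBClientBuilder.defaultClient();

        // Every update also records when the item was last modified, so the
        // backup script can select only items changed since the previous run.
        public void putCustomer(String itemName, String email) {
            String lastModified = String.format("%020d", System.currentTimeMillis());
            sdb.putAttributes(new PutAttributesRequest("customers", itemName, Arrays.asList(
                    new ReplaceableAttribute("email", email, true),
                    new ReplaceableAttribute("last_modified", lastModified, true))));
        }
    }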

To take account of the (blissfully) dynamic nature of SimpleDB, the MySQL backup script needs to be able to create the table structure on the fly, of course. Domain names become MySQL table names. Attribute names become columns.
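
A sketch of what such a dump could look like, using the AWS SDK for Java plus plain JDBC (the MySQL Connector/J driver is assumed to be on the classpath); the connection details, domain name and the last_modified convention from the previous sketch are all assumptions, and multi-valued attributes are ignored for simplicity:

    import com.amazonaws.services.simpledb.AmazonSimpleDB;
    import com.amazonaws.services.simpledb.AmazonSimpleDBClientBuilder;
    import com.amazonaws.services.simpledb.model.Attribute;
    import com.amazonaws.services.simpledb.model.Item;
    import com.amazonaws.services.simpledb.model.SelectRequest;
    import com.amazonaws.services.simpledb.model.SelectResult;

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.HashSet;
    import java.util.Set;

    public class SimpleDBToMySQL {
        public static void main(String[] args) throws SQLException {
            AmazonSimpleDB sdb = AmazonSimpleDBClientBuilder.defaultClient();
            Connection mysql = DriverManager.getConnection(
                    "jdbc:mysql://localhost/sdb_backup", "backup", "secret"); // placeholders
            String domain = "customers";                   // illustrative domain name
            String since = args.length > 0 ? args[0] : ""; // last_modified of previous run

            try (Statement stmt = mysql.createStatement()) {
                // Domain name becomes the MySQL table name.
                stmt.execute("CREATE TABLE IF NOT EXISTS `" + domain + "` (item_name VARCHAR(255))");

                // Remember which columns the table already has.
                Set<String> columns = new HashSet<>();
                ResultSet existing = mysql.getMetaData().getColumns(null, null, domain, null);
                while (existing.next()) columns.add(existing.getString("COLUMN_NAME"));

                String nextToken = null;
                do {
                    SelectResult result = sdb.select(new SelectRequest(
                            "select * from `" + domain + "` where `last_modified` > '" + since + "'")
                            .withNextToken(nextToken));
                    for (Item item : result.getItems()) {
                        StringBuilder names = new StringBuilder("item_name");
                        StringBuilder values = new StringBuilder("'" + esc(item.getName()) + "'");
                        for (Attribute a : item.getAttributes()) {
                            // Attribute names become columns, created on the fly as they appear.
                            if (columns.add(a.getName())) {
                                stmt.execute("ALTER TABLE `" + domain + "` ADD COLUMN `"
                                        + a.getName() + "` TEXT");
                            }
                            names.append(", `").append(a.getName()).append("`");
                            values.append(", '").append(esc(a.getValue())).append("'");
                        }
                        // Always INSERT (never UPDATE), so every historical version is kept.
                        stmt.execute("INSERT INTO `" + domain + "` (" + names + ") VALUES (" + values + ")");
                    }
                    nextToken = result.getNextToken();
                } while (nextToken != null);
            }
        }

        private static String esc(String s) {
            return s.replace("\\", "\\\\").replace("'", "''");
        }
    }

The insert-only design matters here: because rows are never updated in place, the table accumulates one row per item version, which is what makes the reconstruction described next possible.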

With this setup, we can reconstruct the state of our part of the Cloud as it existed at any time, because we’ve saved all historical versions of both S3 and SimpleDB data.

And voila – cheap insurance against our own programming errors, errant end-user deletions and the (arguably unlikely) event of multiple asteroids taking out all of Amazon’s data centres.

2 Comments

  1. Charles - 04.01.2008, 06:10PM

    Hey folks:

    If you are in the mood, how about posting your backup script to the community code section for Amazon SimpleDB?

    Check out:
    http://developer.amazonwebservices.com/connect/entryCreate!default.jspa?categoryID=114

    Pretty Please :)

    Charlie

  2. Kyrre - 04.01.2008, 06:39PM

    Hey Charlie,

    Thanks for your suggestion! We would need to generify it a little for that to be useful, I think. In the meanwhile, I’d be happy to send you excerpts from the (Java) code if you’d like to see how we’ve implemented it. Drop me an email at kyrre@kalibera.com.

    Kyrre
