SUSE Enterprise Storage 4: when traditional storage is not good enough
‘It seemed like a good idea at the time’ pretty much sums up the picture of legacy storage in the typical data centre. And, like the 2009 rom-com starring Meryl Streep and Alec Baldwin, ‘It’s Complicated’. Very complicated.
If yours is like most enterprises, there will be an absolute hotchpotch of appliances, clusters and supporting protocols – some installed on your watch, some put in by your predecessors, and some put in by colleagues in different business units or organisational silos. You’ll have a wide variety of appliances, protocols, proprietary software providers and tiers of data, and some mission-critical stuff it’s scary to touch.
If you’re in the typical enterprise, you’ll be facing data growth of around 30% a year*, finding that damned expensive, having trouble getting appliances to scale, and having problems with ‘large’ data – files that are individually more than 100GB and make poor bedfellows with traditional arrays. For most organisations, storage is a CapEx drain, a sink of skilled staff and ‘thinking time’, and a barrier to moving towards the software defined data centre; in short, a drag on agility at a time when IT teams are being hard pressed by the business to move faster and faster in the age of digital transformation. On top of that, you’ll likely have concerns about security and governance, challenges with performance and availability, a lot of data silos, and issues with capacity planning, backup, recovery and archiving. And did I mention the cost thing yet?
When it comes to managing and storing large files, particularly of unstructured data – as so many of us are or soon will be doing – traditional storage just isn’t good enough. An appliance will set you back millions of dollars (yes, all you readers in Britain, the dollar exchange rate is about to force up your costs in the post-Brexit world) and can typically deal with around 500TB of data before running out of space; for large data use cases it’s a bit like trying to park your car in a supermarket carrier bag, and a really expensive carrier bag at that.
So if you are in the ‘large’ data world, you’re probably using or considering object storage: all that metadata capability that allows the system to effectively label and categorise unstructured data, the epic scalability, and the capacity to disperse your footprint across the globe. But before you start down the proprietary route, you should actively consider open source software defined storage. Otherwise, you will very likely find that your latest storage platform is only a partial answer to your problems, adds to complexity and administration time, and jacks up your CapEx at a time you can ill afford it.
With SUSE Enterprise Storage 4 there is now unified support for file, block and object – the first distribution of Ceph, the leading open source storage project, to deliver this. You can run on commodity hardware (we even support 64-bit ARM). When you run out of space you can simply add servers and nodes – true scale-out, so you never run out of space in your object store. We run to TB, PB and beyond: on cloud principles there are no known limits to capacity. And innovation is coming at breakneck speed, quicker than the proprietary providers can match.
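To make the scale-out point concrete, here is a minimal back-of-the-envelope sketch of how usable capacity grows linearly as you add commodity nodes. The node counts, drives per node and drive sizes are hypothetical figures for illustration, not SUSE-published numbers; the replication factor of 3 is a common Ceph default for replicated pools.

```python
# Sketch: linear capacity growth in a scale-out cluster.
# All hardware figures below are illustrative assumptions.

def usable_capacity_tb(nodes: int, drives_per_node: int,
                       drive_size_tb: float, replicas: int = 3) -> float:
    """Usable capacity = raw capacity divided by the replication factor."""
    raw_tb = nodes * drives_per_node * drive_size_tb
    return raw_tb / replicas

# Start with 4 nodes, each holding 12 x 8 TB drives...
before = usable_capacity_tb(4, 12, 8.0)
# ...then scale out by simply adding 2 more identical nodes.
after = usable_capacity_tb(6, 12, 8.0)
print(f"{before:.0f} TB -> {after:.0f} TB usable")  # 128 TB -> 192 TB usable
```

The point of the sketch: there is no forklift upgrade in the arithmetic – capacity is just a function of node count, so growing the cluster means adding another row to the rack, not replacing an appliance.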
I’m not saying SUSE Enterprise Storage 4 will solve all your problems – but we do think the case for adoption is strong: we can go a long way towards helping you significantly reduce cost, complexity, and administration and management time, improve capacity planning, and fit you out for the software defined data centre of the future – one that is also very likely open source.
Talk to the leaders, and park your large data car in the multi-storey.