
6 Reasons Why Software Defined Storage Will Raise Your Buying Power

IT has traditionally worked out its storage requirements by looking at data growth levels and making a projection. It has generally paid for increases by replacing old arrays in an asset management life cycle and put new costs into specific projects: paying for storage as part of a new business process, for example, or even sneaking it in. However, as the market moves to software defined storage, you should have one storage pool, one storage budget, unlimited scale and a different conversation with your suppliers. As a result, software defined storage will change the way you budget for storage forever and will raise your buying power.

Enterprise storage costs are complicated. To anyone but storage architects, calculating the total cost of ownership for enterprise storage is a substantial challenge. In fact, it is so complicated that an entire class of analyst has grown up around it. There is no end of major and minor differences between appliance types and architectures, the software running on them, the demands of the applications they support, the associated support costs and service level agreements, even the cost of plugging into the main systems. Then there is data tiering: keeping critical data used on a daily basis close to the application using it, on high-performance and high-cost equipment, while keeping the less useful, but often legally required, data (think of the new GDPR legislation, for example: https://www.suse.com/c/storage-admins-survival-guide-gdpr/) somewhere as cheap as possible. There is also the data you can't live without on the system that cannot have downtime, which is replicated in real time. No matter how well managed, de-duped, tiered or stored, enterprise data has a life of its own.

Ordinary consumers could be forgiven for being baffled by enterprise storage pricing. The natural comparison is on cost per TB, as understood by a visit to a PC retailer or a quick search on Google. But enterprise storage costs per TB are greater by an order of magnitude. The reason is that with enterprise data, there’s no such thing as a single instance of data.

Complex dependencies make architects conservative. Imagine for a moment you have one TB of data, and you need to be careful with it, so you store it in a RAID array. Because that data is important, and "failure is not an option," you make a mirror copy of it. Now one TB just became two TB. Next, imagine that data is in a SAN. To protect against the chance of a node failure, your data is synchronized to a second SAN node. Two TB just became four TB. So far so good, but what if there is a problem? You need at least one on-site backup, so you take point-in-time copies a couple of times a week. Over a month or so that adds about another three TB, taking the total to seven TB. Next you must deal with the risk of more serious outages, so you make a separate backup at a different site; even if you're doing that just once a month, you are adding another one TB. Now, of course, you are going to perform de-duplication to reduce the volume of data, and you are going to be as clever as you can with tiering. However, you are supporting a series of complex processes with critical data, and that makes you conservative. "If it ain't broke, don't fix it" sums up the attitude of many an architect. It wouldn't matter so much if the business gave you a chance to build the systems you really need, but more often than not, storage architecture is as much a product of short-term requirements as long-term planning.
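The arithmetic above can be sketched in a few lines. This is a rough, illustrative model of the steps just described; the multipliers (mirroring, SAN synchronization, snapshot retention, off-site copies) are the example's assumptions, not vendor figures.

```python
# Illustrative only: rough capacity math for one "important" terabyte,
# following the replication steps described above.

def effective_footprint_tb(primary_tb: float) -> float:
    """Estimate total raw capacity consumed by one logical volume."""
    mirrored = primary_tb * 2      # RAID mirror: 1 TB becomes 2 TB
    san_synced = mirrored * 2      # synchronized second SAN node: 4 TB
    snapshots = primary_tb * 3     # ~twice-weekly point-in-time copies, kept for a month
    offsite = primary_tb * 1      # monthly off-site backup
    return san_synced + snapshots + offsite

print(effective_footprint_tb(1.0))  # one TB of data consumes 8.0 TB of capacity
```

The point of the sketch is the multiplier, not the exact figures: every terabyte of "important" data can easily consume seven or eight terabytes of raw capacity before de-duplication and tiering claw some of it back.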

The accidental architecture: it all seemed like such a good idea at the time. Change happens in business. Companies grow by merger and acquisition. New products are launched; old products are retired. New laws mandate new compliance obligations on what data can and can't be stored, for how long, in what format and with what level of security.

Trying to make a long-term plan in these circumstances is difficult, like trying to shoot a moving target while riding a roller coaster. In an ideal world, enterprise storage systems would be perfectly configured to serve the needs of the business. In the real world this is seldom the case, not because IT is making bad decisions, but because the criteria for those decisions are in constant flux, influenced by uncontrollable external events and often driven by immediate requirements: the storage you need for a new business process, for a new product launch, for compliance with a new law.

Taken one by one, storage decisions look rational. Looked at as a whole, the results of these storage decisions form an environment that verges on the chaotic.

Data volumes are going to get exponentially bigger. Amid all this change, there is one certainty: the volume of data will steadily grow. Where once we dealt in megabytes, we now deal in gigabytes, terabytes, petabytes and exabytes. Since Gartner analyst Doug Laney coined the defining three Vs of big data (variety, volume and velocity) in 2001, the growth of data has become as certain as death and taxes. Data no longer grows by percentages; it grows by orders of magnitude.

Storage is complicated, expensive, difficult to maintain and impossible to do without. Following best practice, using best-of-breed hardware and software for current requirements, and often following the advice of analysts and consultants, storage has been built piecemeal into an environment that is extraordinarily complex in its architecture, upkeep and financial liability. It's hard to budget for, hard to maintain, and impossible to do without.

IT teams have a choice. They can use the approach they always have, adding new storage as current circumstances demand and the business drives them. Alternatively, they can use new technology, software defined storage, to re-architect storage to be smarter, less complicated and infinitely scalable. New choices are available that can meet the needs of tomorrow, are cheaper and eliminate proprietary software and hardware vendor lock-in. With IDC predicting that data growth will run in excess of 40 percent every year for the next decade, the current costs associated with that growth are unsustainable.

Enter open software defined storage from SUSE, powered by Ceph technology. Software defined storage separates the physical storage (the data plane) from the data storage logic (the control plane). This approach eliminates the need for proprietary hardware and can generate 50 percent cost savings compared to traditional arrays and appliances.

SUSE® Enterprise Storage is powered by Ceph technology, the most popular distributed software defined storage solution for OpenStack in the marketplace. It scales from storage appliance to cost-effective cloud solution and is portable across different OpenStack cloud providers.

Ceph’s foundation is the Reliable Autonomic Distributed Object Store (RADOS), which provides your applications with object, block and file system storage in a single unified storage cluster. This makes Ceph flexible, highly reliable and easy for you to manage. Ceph’s RADOS provides you with extraordinary data storage scalability: thousands of client hosts or KVMs accessing petabytes to exabytes of data. Each of your applications can use the object, block or file system interface to the same RADOS cluster simultaneously, which means your Ceph storage system serves as a flexible foundation for all of your data storage needs.
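A key reason RADOS scales to thousands of clients is that object placement is computed algorithmically (by Ceph's CRUSH algorithm) rather than looked up in a central directory. The sketch below is a deliberately simplified, hypothetical illustration of that idea, not CRUSH itself; real CRUSH also accounts for failure domains, device weights and cluster topology.

```python
import hashlib

# Simplified, hypothetical sketch of algorithmic object placement.
# The placement-group count and OSD list are assumptions for the example.

NUM_PGS = 128                            # placement groups (assumed)
OSDS = [f"osd.{i}" for i in range(12)]   # twelve storage daemons (assumed)

def place(object_name: str, replicas: int = 3) -> list:
    """Map an object name to a placement group, then to a set of OSDs."""
    pg = int(hashlib.md5(object_name.encode()).hexdigest(), 16) % NUM_PGS
    start = pg % len(OSDS)
    return [OSDS[(start + i) % len(OSDS)] for i in range(replicas)]

# Every client computes the same answer for the same object name,
# so no central lookup table is needed.
assert place("volume-0001/block-42") == place("volume-0001/block-42")
print(place("volume-0001/block-42"))
```

Because any client can derive an object's location independently, there is no metadata bottleneck to scale past; adding capacity means adding OSDs, not upgrading a lookup service.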

As of the Luminous release, Ceph provides industry-leading storage functionality such as unified block and object storage, thin provisioning, erasure coding and cache tiering, plus a whole host of enterprise storage services. What’s more, Ceph is self-healing and self-managing. With Ceph’s powerful capabilities, SUSE Enterprise Storage is significantly less expensive to manage and administer than proprietary systems. It will enable you to effectively manage even a projected data growth rate of 40-50 percent in your organization without exceeding your established IT budget.

Conclusion: open software defined storage will change how you budget, and you are in very safe hands with SUSE Enterprise Storage.

Learn more:

https://www.suse.com/solutions/software-defined-storage/
https://www.suse.com/products/suse-enterprise-storage/
https://www.suse.com/solutions/software-defined-storage/disk-to-disk-backup-storage-requirements/

On a final note, SUSE Storage’s partner ecosystem is rapidly expanding and includes companies such as HPE, Lenovo, Veeam, Veritas, Micro Focus, Commvault, iTernity, Intel, IBM, SAP, Supermicro, Cisco, SEP and Storage Made Easy.


