3 Reasons Why the Future of Storage is Open Source and Cloud

As predicted by a number of key analysts, the market is seeing significant growth in software defined storage during 2016, and with solid reasoning. The capacity to pool storage across different arrays and applications is the latest wave in virtualization, and it is beginning to have the same impact on the cost and upgrade cycle for storage as it has had on servers.

While the growth of software defined storage isn’t good news for traditional array vendors, it is very good news for IT teams, who stand to gain unlimited scale, reduced costs and non-proprietary management. Open source means the end of vendor lock-in and true cloud portability. The future of storage is open source and cloud. Here are three reasons why:

  1. What IT teams have learned to expect from virtualization: reduced cost, reduced vendor dependence. A decade or so ago, data centers looked very different from today. Each application had its own servers and storage working in a series of technological islands, with each island provisioned with enough processing power to run comfortably during peak demand. Inevitably, making sure systems ran comfortably at peak meant over-provisioning: the processing requirements had to be based on the worst-case scenario, effectively providing for seasonal peaks like Christmas all year round. For years IT added application after application, requiring server after server, rack upon rack, over an ever-greater floor space, running up an ever-increasing electricity bill from epic power and cooling costs. The heat output was so great that some companies could use the data center to heat their buildings, and others placed data centers in the cold air of mountainsides to reduce costs.

    With every server added, the amount of idle processing power grew until the unused potential became massive. The effect was somewhat like placing a dam across a mighty river: the tiniest trickle of water escaped downstream while the energy potential building in the lake behind grew and grew. Virtualization opened the sluice gates, unleashing a torrent of processing power that could be used for new applications. This meant power on demand at the flick of a switch, fast provisioning, doing more with less, lower energy bills, a reduced data center footprint and the severing of the link between the software supplier and the hardware. Expensive proprietary servers were out; commodity servers differentiated only by price were in. In this world, the best server was the cheapest, because they were all essentially the same. Best of all, there was a huge drop in the number of new physical servers required. And with all that unused potential available, why add more?

    Virtualization became a “no-brainer,” a technology with a business case so sound, so obvious, so clear that adoption was immediate and near universal. For the IT team, it meant making better use of IT resources, reducing vendor lock-in and, above all, cost savings. Put the v-word in front of anything, and IT expects the vendor to show how they are going to be able to do more with less, for less. Years of experience and established best practices have led IT teams to make virtualization synonymous with cost reduction. Storage is no exception. Any vendor talking storage virtualization while asking for increased investment is going to have a very short conversation with their customers.

  2. Storage virtualization disrupts traditional vendor business models. While IT has reaped the benefits of better resource use and cost reductions, this has come at the expense of sales for vendors. As adoption of server virtualization took off, server sales plummeted, moving from a steady gain in volume and value every quarter to a catastrophic drop. In 2009, with the recession in full force (itself a significant driver of virtualization for cost savings), analysts at IDC recorded the first-ever drop in server sales. All the big players, HP, IBM, Dell, Sun Microsystems (now Oracle) and Fujitsu, recorded huge decreases in sales, between 18.8 percent and 31.2 percent year over year. The impact was softer in the high-power CISC and RISC segments, where it was tougher for IT teams to change vendors (e.g., with mission-critical Oracle applications where licensing costs were tied to the number of processors in use or specific hardware), but especially severe in the lower-end x86 market.

    Structural changes followed. IBM exited the commodity market altogether, selling out to Lenovo, which, with a cheaper manufacturing base built on lower wages and controlled exchange rates, was in a better position to win. HP endured a revolving door of CEOs and successive re-inventions, and Dell went private. This pattern of disruptive change is set to follow into the storage marketplace. When even the very largest suppliers suffer in this way, an expectation builds of disruptive, game-changing technology. IT buyers stop looking at brand in the same way. Where there used to be a perception of safe partners with long-term, safe product road maps and low risk, there is now an expectation that the older players are going to be challenged by new companies with new approaches and technologies. The famous ’70s slogan “no-one ever got fired for buying IBM” doesn’t hold water when IBM shuts up shop and sells its commodity server business. IT buyers expect the same disruption in storage, and they are right to do so.

    In this environment, the status quo for storage vendors cannot hold. The big players are nervously eyeing each other, waiting for the deciding moves in what adds up to a game of enterprise business poker with astronomical stakes. The old proprietary business model is a busted flush, and they all know that sooner or later someone will call their bluff on price and locked-in software. A new player in the game, or even somebody already at the table, is going to bring the game into a new phase—or, as distinguished Gartner analyst and VP Joe Skorupa put it, “throw the first punch” in 2016.

  3. Cloud makes the case for open source compelling because data must be portable. Just at the point where server sales might have been expected to recover, IT teams discovered the cloud. Why bother maintaining an enormous hardware estate with all the hassle of patching and managing, upgrading, retiring and replacing if you can offload that workload cost-effectively onto a third party and so free up time to concentrate on more rewarding activity? For ambitious CIOs wanting to generate business advantage for the board, “keeping the lights on” in the data center is a distant priority. It’s no wonder more and more infrastructure is moving into the cloud, and with it, data. And with the data goes storage.

    IT teams who want to avoid being locked into cloud suppliers need to think carefully about how they exit one provider and move to another. Smart buyers need to play suppliers off against each other, compare prices and offerings and choose whichever is the best fit for current requirements, knowing that those requirements can change. A better offer can come along, and, if you are going to be in a position to seize on it, you must be able to exit your current supplier without a disruptive, costly and risky migration. If this goal is to be achieved, data must be portable.

    Smart storage buyers need data portability to have an exit plan, and open source provides it. Storage powered by Ceph is easily transferred across hundreds of different providers, including Amazon.
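To make that portability concrete, here is a minimal sketch (not taken from the article) of what such an exit plan can look like in practice. It assumes a Ceph cluster exposing the S3-compatible RADOS Gateway; the endpoint URL, bucket name and credentials are placeholders. The same client code targets a Ceph-backed provider or Amazon S3 simply by changing the endpoint and keys.

```python
import boto3

# Point an ordinary S3 client at a Ceph RADOS Gateway endpoint.
# Swapping endpoint_url and the credentials is all it takes to target
# Amazon S3 or another S3-compatible provider; the application code is unchanged.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:7480",   # hypothetical Ceph RGW endpoint
    aws_access_key_id="CEPH_ACCESS_KEY",          # placeholder credentials
    aws_secret_access_key="CEPH_SECRET_KEY",
)

# Create a bucket and store an object, exactly as you would against Amazon S3.
s3.create_bucket(Bucket="portable-data")
s3.put_object(Bucket="portable-data", Key="exit-plan.txt", Body=b"data that can move with you")

# Read it back to confirm the round trip.
print(s3.get_object(Bucket="portable-data", Key="exit-plan.txt")["Body"].read().decode())
```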

Enter software defined storage from SUSE®, powered by Ceph.

Software defined storage separates the data storage logic (the control plane) from the physical storage hardware (the data plane). This approach eliminates the need for proprietary hardware and can generate cost savings of 50 percent compared with traditional arrays and appliances.

SUSE Enterprise Storage is powered by Ceph, the most popular distributed storage solution for OpenStack in the marketplace. It scales from a single storage appliance to a cost-effective cloud solution, and it is portable across different OpenStack cloud providers.

Ceph’s foundation is the Reliable Autonomic Distributed Object Store (RADOS), which provides your applications with object, block and file system storage in a single unified storage cluster. This makes Ceph flexible, highly reliable and easy for you to manage.
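As a rough illustration of that unified model, the sketch below (an assumption-laden example, not SUSE documentation) uses the python-rados bindings to store and read an object directly in RADOS. It assumes a reachable cluster described by /etc/ceph/ceph.conf and a hypothetical pool named "app-data".

```python
import rados

# Connect to the cluster described by the local ceph.conf (assumed to exist).
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

try:
    # Open an I/O context on a pool; "app-data" is a hypothetical pool name.
    ioctx = cluster.open_ioctx("app-data")
    try:
        # Write an object into RADOS and read it straight back.
        ioctx.write_full("greeting", b"one cluster for object, block and file")
        print(ioctx.read("greeting"))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```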

Ceph’s RADOS provides extraordinary data storage scalability—thousands of client hosts or KVMs accessing petabytes to exabytes of data. Each one of your applications can use the object, block or file system interfaces to the same RADOS cluster simultaneously. This means your Ceph storage system serves as a flexible foundation for all of your data storage needs.
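For instance, the block interface works against the very same cluster and pool as the object example above. The sketch below assumes the python-rbd bindings; the image name and size are illustrative.

```python
import rados
import rbd

# Reuse the same cluster and (hypothetical) pool as the object example above.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("app-data")
    try:
        # Create a 1 GiB block image; RBD images are thin provisioned,
        # so space is only consumed as data is actually written.
        rbd.RBD().create(ioctx, "vm-disk-01", 1024 ** 3)

        # Write to the image through the block interface.
        with rbd.Image(ioctx, "vm-disk-01") as image:
            image.write(b"boot sector placeholder", 0)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```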

Ceph provides industry-leading storage functionality such as unified block and object, thin provisioning, erasure coding and cache tiering. What’s more, Ceph is self-healing and self-managing.

With Ceph’s powerful capabilities, SUSE Enterprise Storage is significantly less expensive to manage and administer than proprietary systems. It will enable you to effectively manage even a projected data growth rate of 40–50 percent in your organization without exceeding your established IT budget.
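To put that growth rate in perspective, here is a back-of-the-envelope projection (an illustration with made-up numbers, not a SUSE figure) showing how quickly capacity demands compound at the midpoint of that 40–50 percent range.

```python
# Hypothetical starting point: 100 TB of data today, growing 45% per year.
capacity_tb = 100.0

for year in range(1, 6):
    capacity_tb *= 1.45
    print(f"Year {year}: {capacity_tb:,.0f} TB")

# After five years the estate is roughly 6.4 times its original size,
# which is why per-terabyte cost and scale-out economics matter so much.
```

At growth rates like these, the economics of open source, scale-out storage on commodity hardware speak for themselves.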
