The traditional way we think of data is as something that’s stored and then used later, like electricity in batteries. But today, data is always flowing, and constantly in use, much more like the electricity you pull from a grid than the energy you store in a battery. In the old days, you could wait a day, even a week, to get ahold of data. Today, it needs to be there at the flip of a switch.
This is true across industries. When a competitor’s holiday sale starts eating into your profits, you need data on past sales and margins so you can adjust your prices—before the holiday weekend is over. When a hurricane threatens your shipping operations, you need the data on fuel costs, time lost and more from the last big storm—and you can’t wait for IT to dig that data out of storage. When a customer calls with a complaint, you need to know how much, and how often, that customer has bought from you before—all while the customer is on the phone.
How does any infrastructure accomplish all of this? It’s a combination of many moving parts. Let’s look at three big categories and how they fit together.
The first category is networking infrastructure. Networking is so fundamental that many people overlook it when thinking about data access, but a reliable network with consistent speeds is as essential for collecting, storing and accessing data as any other piece of your data center.
The second category covers databases, applications and the server infrastructure they run on. Helping ensure nonstop data access in this category is very much our expertise at SUSE. Our high-availability extension for SUSE Linux Enterprise Server is a clustering solution that can help ensure a fault in one server doesn’t affect the applications or databases running on top.
It’s also worth noting that in the specific category of in-memory databases, we provide support for SAP HANA that is a step above what’s available anywhere else. SAP HANA includes system replication, which copies in-memory data to a secondary system in case the primary system fails; out of the box, though, failover to that secondary is a manual step. With SUSE Linux Enterprise Server for SAP Applications, that failover is automated through two resource agents that continuously monitor the system. Designed to work with SAP HANA system replication in both scale-up and scale-out deployments, this automation reduces data recovery time for large in-memory datasets from hours to minutes, with no human intervention required.
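As a rough sketch, the two resource agents (SAPHanaTopology and SAPHana) are typically wired into the Pacemaker cluster with the crm shell along these lines. The SID “HA1”, instance number “00”, timeouts and scores below are illustrative placeholders taken from common examples, not a tested configuration for any specific environment:

```
# Illustrative crm (Pacemaker) configuration; SID and instance number
# are placeholders for your own values.
primitive rsc_SAPHanaTopology_HA1_HDB00 ocf:suse:SAPHanaTopology \
    params SID=HA1 InstanceNumber=00 \
    op monitor interval=10 timeout=600
clone cln_SAPHanaTopology_HA1_HDB00 rsc_SAPHanaTopology_HA1_HDB00 \
    meta clone-node-max=1 interleave=true
primitive rsc_SAPHana_HA1_HDB00 ocf:suse:SAPHana \
    params SID=HA1 InstanceNumber=00 \
        PREFER_SITE_TAKEOVER=true AUTOMATED_REGISTER=false \
    op monitor interval=60 role=Master timeout=700 \
    op monitor interval=61 role=Slave timeout=700
# Promotable (master/slave) clone: one primary, one replicating secondary
ms msl_SAPHana_HA1_HDB00 rsc_SAPHana_HA1_HDB00 \
    meta clone-max=2 clone-node-max=1 interleave=true
# The topology agent gathers cluster state before the SAPHana agent acts
order ord_SAPHana Optional: cln_SAPHanaTopology_HA1_HDB00 msl_SAPHana_HA1_HDB00
```

The monitor operations are what make the failover automatic: the agents keep checking replication state, and when the primary fails, Pacemaker promotes the secondary without waiting for an administrator.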
We can complete our tour of the data center with your storage infrastructure. In this category, I’m also lumping in the data management and access control tools you likely have in place to channel data from place to place and ensure only the right people have access to it.
A powerful way to ensure data remains available is a software-defined storage (SDS) cluster. An SDS cluster, such as one built on SUSE Enterprise Storage, uses industry-standard server hardware to create a resilient cluster of storage nodes—much like the way our high-availability extension protects your applications and workloads.
Another benefit of SDS is that it makes large-scale storage much more affordable. This means it’s possible to back up more of your data to disk. Traditionally, organizations have relied on tape for cost-efficient backup. But while tape is reliable, it also makes recovering data quickly difficult. You may not have lost the data, but if it takes IT days to restore access to it, how much have you benefited? Those days could be crucial. By making disk-based backup cost-effective, SDS allows you to back up your data in a way that keeps it quickly accessible.
When the flow of data through your organization stops, so does your organization’s productivity. Luckily, maintaining data access is possible. By addressing the three kinds of infrastructure that underlie your data access, you can prevent or mitigate issues and ensure that the jolt of data that turns the gears of your productivity never stops.
SUSE Linux Enterprise High Availability Extension 15 SP1
Building nonstop data access is an integral step in achieving a highly available infrastructure. The latest refresh of SUSE Linux Enterprise High Availability Extension enables enterprises to eliminate single points of failure and manage cluster servers in data centers anywhere in the world. It includes:
- Faster time to value through enhanced, continuous data replication via DRBD (Distributed Replicated Block Device), which handles locking and synchronization across multiple systems in the cluster
- Geo clustering, which protects workloads across globally distributed data centers and provides rules-based failover, automatic or manual, to another cluster outside the affected area—protecting against regional disasters with service failover over any distance
- Cloud fencing support, which isolates and protects shared resources when a node malfunctions; even in the public cloud, where you have limited control over the underlying infrastructure, fencing protects the resources you do control
- Tools to easily manage clustered Linux servers and monitor the clustered environment
- Data replication across multiple clusters to prevent mission-critical application downtime
- Maximum availability of mission-critical services in mixed clustering environments of both physical and virtual Linux servers
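For the DRBD replication mentioned in the list above, a minimal resource definition looks roughly like the following sketch. The resource name, node names, device paths and addresses are all placeholders for illustration:

```
# Illustrative /etc/drbd.d/r0.res; node names, disks and IPs are placeholders
resource r0 {
    protocol C;              # synchronous: writes confirmed on both nodes
    device    /dev/drbd0;    # the replicated block device applications use
    disk      /dev/sdb1;     # local backing disk on each node
    meta-disk internal;
    on node1 {
        address 10.0.0.1:7789;
    }
    on node2 {
        address 10.0.0.2:7789;
    }
}
```

Protocol C is the usual choice for a high-availability pair, since a write isn’t acknowledged until it has reached both nodes—so a failover never loses acknowledged data.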
Check out SUSE’s new mini-movie Sam the IT Admin and Business Critical.
Thanks for reading,
Jeff Reser @JeffReserNC