Flexible, policy-driven clustering and continuous data replication – boost flexibility while improving service availability and resource utilization by supporting the mixed clustering of both physical and virtual Linux servers. Protect workloads across globally distributed data centers.
Corosync and OpenAIS are clustering and distributed-systems infrastructure components. OpenAIS, an open source implementation of the Service Availability Forum Application Interface Specification, provides the cluster messaging and membership layer and is the leading standards-based communication protocol for server and storage clustering. The Corosync cluster engine provides membership, ordered messaging with virtual synchrony guarantees, closed process communication groups and an extensible framework.
Pacemaker is a highly scalable cluster resource manager with a flexible policy engine that supports n-node clusters. Using Pacemaker, you can continuously monitor the health of your resources, manage dependencies and automatically stop and start services based on rules and policies that are easy to configure.
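As a sketch of what Pacemaker's policy-driven configuration looks like, the crm shell fragment below defines a monitored virtual IP resource and a placement rule. The IP address, netmask and node name are illustrative placeholders, not values from this document.

```shell
# Hypothetical crm shell (crmsh) configuration sketch.
# Defines a cluster IP that Pacemaker monitors every 10 seconds
# and restarts or relocates automatically if the monitor fails.
primitive cluster-ip ocf:heartbeat:IPaddr2 \
    params ip=192.0.2.50 cidr_netmask=24 \
    op monitor interval=10s

# Policy rule: prefer running the resource on node1 (score 100),
# but allow failover to any other node if node1 is unavailable.
location prefer-node1 cluster-ip 100: node1
```

Loaded with `crm configure`, rules like these drive the automatic stop/start and failover behavior described above.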
Rules-based failover enables automatic or manual transfer of a workload to another cluster outside the affected area, so your mission-critical workloads continue to run and remain protected across globally distributed data centers.
Mixed clustering of both physical and virtual Linux servers to boost flexibility while improving service availability and resource utilization.
HAProxy is a very fast and reliable solution that offers high availability, load balancing and proxying for TCP and HTTP-based applications by spreading requests across multiple servers. It complements the Linux virtual server load balancer.
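A minimal HAProxy configuration fragment illustrates the request-spreading described above; the listener port, server names and addresses are assumptions for the example, not part of this product's defaults.

```text
# Sketch of an haproxy.cfg fragment: round-robin HTTP load balancing
# across two backend web servers, with health checks enabled.
frontend http_in
    bind *:80
    default_backend web_servers

backend web_servers
    balance roundrobin
    server web1 192.0.2.11:80 check
    server web2 192.0.2.12:80 check
```

The `check` keyword enables periodic health checks, so failed servers are removed from rotation automatically.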
Cluster Join enables effortless cluster setup and expansion of existing clusters. Once you build a node or cluster, you can add new nodes without the need to manually replicate configurations.
Storage-based fencing that eliminates single points of failure.
Metro area clusters provide failover across data center locations as far as 30 kilometers apart.
Templates and wizards to help quickly complete basic setup tasks.
Cluster Bootstrap offers a menu-driven setup process for rapidly deploying a base cluster.
Geo clustering delivers failover across unlimited distances and protects against regional disruptive events.
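Geo clusters in this product family are typically coordinated by the booth ticket manager. The fragment below is a hedged sketch of a booth configuration with two sites and an arbitrator; all addresses and the ticket name are hypothetical.

```text
# Sketch of a booth.conf for a two-site geo cluster plus arbitrator.
# A "ticket" grants one site at a time the right to run a workload.
transport = UDP
port     = 9929

site       = 192.0.2.1     # data center A
site       = 192.0.2.2     # data center B
arbitrator = 192.0.2.100   # tie-breaker at a third location

ticket = "ticket-web"
    expire = 600           # ticket lease, in seconds
```

If a site loses its ticket (for example, during a regional outage), booth can grant it to the surviving site, triggering failover across unlimited distances.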
Setup, administration, management and monitoring – save time by easily managing clustered Linux servers through a powerful unified interface that lets you quickly install, configure and manage clusters, plus a simple, user-friendly tool for monitoring the clustered environment.
HAWK (High Availability Web Konsole) is a powerful unified web interface that saves time by letting you quickly install, configure, manage and monitor Pacemaker HA clusters. The web console supports full cluster administration, such as adding resources, constraints and dependencies. You can also use it to manage groups of resources, which improves scalability on large clusters. Additionally, it provides access to control lists, the cluster test drive and a graphical history explorer.
Access control lists align cluster management with your processes and policies. By applying role-based controls you can ensure only IT staff with the proper authority can access cluster management tools. Restricting access not only helps maintain security, it also improves cluster reliability by limiting mistakes.
Cluster-wide shell improves the effectiveness of managing cluster nodes by enabling the execution of commands across all nodes.
History explorer allows interactive access to cluster logs. It displays and analyzes actions taken by SUSE Linux Enterprise High Availability Extension.
Cluster test drive allows users to simulate a failover situation before an actual disaster happens, making sure of the configuration and resource allocation prior to production.
Resource agents for open source applications such as Apache, IPv6, DRBD, KVM, Xen and Postgres; resource agents for popular third-party applications such as IBM WebSphere, IBM DB2, VMware and SAP.
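As an example of putting one of these agents to work, the crm shell sketch below makes an Apache web server a monitored cluster resource; the configuration file path and timing values are illustrative assumptions.

```shell
# Hypothetical crm shell sketch: manage Apache via its resource agent.
# Pacemaker starts, stops and health-checks the web server; if the
# 30-second monitor fails, the service is restarted or failed over.
primitive web-server ocf:heartbeat:apache \
    params configfile="/etc/apache2/httpd.conf" \
    op monitor interval=30s timeout=20s
```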
Clustered Samba (CTDB) makes Samba highly available and scalable across multiple nodes, with transparent failover enabled by cluster-wide locking. CTDB resources are automatically added and kept in sync with Active Directory.
Quorum devices act as arbitrators for two-node clusters, providing the ability to make cluster management decisions when a simple decision process does not produce a clear choice, giving administrators greater control over the applications and data in the cluster.
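A quorum device is typically configured in the `quorum` section of corosync.conf, pointing at an external corosync-qnetd arbitration host. The fragment below is a sketch; the host address and algorithm choice are assumptions for illustration.

```text
# Sketch of a corosync.conf quorum section using a quorum device.
# The qdevice asks an external qnetd server (on a third host) to
# break ties when the two cluster nodes lose sight of each other.
quorum {
    provider: corosync_votequorum
    device {
        model: net
        net {
            host: 192.0.2.10      # corosync-qnetd arbitration host
            algorithm: ffsplit    # fifty-fifty split tie-breaking
        }
    }
}
```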
Continuous data replication – replicate data across cluster servers in data centers anywhere in the world and minimize data loss due to corruption or failure, protecting your data assets using your existing IT infrastructure.
Distributed Replicated Block Device (DRBD) is a leading open source networked disk management tool, enabling you to build single partitions from multiple disks that mirror each other and make data highly available. You can also quickly restore your clustered services via its fast data resynchronization capabilities. DRBD mirrors the data from the active node of a high availability cluster to its standby node. DRBD supports both synchronous and asynchronous mirroring. In the event of an outage, DRBD automatically resynchronizes the temporarily unavailable node to the latest version of data, without interfering with the service that is running. Additionally, DRBD includes data compression algorithms that reduce replication times.
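A DRBD mirror is declared in a resource file; the sketch below pairs two nodes with synchronous replication. Host names, device paths and addresses are placeholders, not values from this document.

```text
# Sketch of a DRBD resource file (e.g. /etc/drbd.d/r0.res).
# Protocol C is synchronous mirroring: a write completes only after
# both nodes have it. Protocol A would give asynchronous mirroring.
resource r0 {
    protocol C;
    on node1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.0.2.1:7788;
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.0.2.2:7788;
        meta-disk internal;
    }
}
```

After a node returns from an outage, DRBD resynchronizes only the blocks that changed, which is what makes recovery fast.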
Node recovery with Relax-and-Recover (ReaR) enables a quick return to full operational status after a node failure. It allows the administrator to take a full snapshot of the system and restore this snapshot on recovery hardware after a disaster.
Cluster-aware file system, MD RAID, and volume management for optimized performance.
MD RAID 1 is a software-based RAID storage solution for clusters, providing the redundancy of RAID 1 mirroring across cluster nodes. Performance is nearly the same as native MD RAID.
OCFS2 (Oracle Cluster File System 2) is a shared-disk, POSIX-compliant, general-purpose cluster file system. With cluster-aware POSIX locking, you can cluster a much wider range of applications for higher availability. You can also resize your clusters and add new nodes on the fly, and cluster-aware applications can take advantage of parallel I/O for higher performance.
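Mounting OCFS2 across all nodes generally requires the distributed lock manager plus a cluster-managed mount. The crm shell sketch below shows one common arrangement; the device, mount point and timing values are illustrative assumptions.

```shell
# Hypothetical crm shell sketch: DLM plus an OCFS2 mount, cloned so
# every node runs its own copy and mounts the shared file system.
primitive dlm ocf:pacemaker:controld \
    op monitor interval=60s

primitive shared-fs ocf:heartbeat:Filesystem \
    params device="/dev/sdb1" directory="/srv/shared" fstype="ocfs2" \
    op monitor interval=20s

group base-group dlm shared-fs
clone base-clone base-group meta interleave=true
```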
GFS2 (Global File System 2) is a shared-disk file system for Linux clusters that allows all nodes direct, concurrent access to the same shared block storage. GFS2 is supported for read access only; write access is not officially supported.
Clustered Logical Volume Manager 2 offers a more convenient, single, cluster-wide view of storage. Clustering extensions to the standard LVM2 toolset allow you to use existing LVM2 commands to safely and simply manage shared storage.
Virtualization aware for managing virtual clusters as well as physical clusters.
KVM and Xen support – the cluster resource manager is able to recognize, monitor and manage services running within virtual servers, as well as services running on physical servers. Virtual servers can be clustered together and services can even be clustered within a virtual server. Moreover, virtual servers can be clustered with physical servers, and physical servers can be clustered with each other, extending high availability from virtual to physical workloads. The ability to encapsulate entire workloads within virtual guests means that you can easily replicate and manage them using the tools and capabilities provided within the solution, such as DRBD, OCFS2 and Cluster LVM2. SUSE Linux Enterprise High Availability Extension's support for virtualized environments gives you unprecedented flexibility to improve service availability as well as resource utilization.
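One common way to make a whole virtual machine a managed cluster resource is the VirtualDomain resource agent for libvirt-based guests. The sketch below is hedged: the domain XML path and URI are placeholders for illustration.

```shell
# Hypothetical crm shell sketch: treat a KVM guest as a cluster
# resource. Pacemaker monitors the VM and restarts or relocates it
# (including its encapsulated workload) on failure.
primitive vm-web ocf:heartbeat:VirtualDomain \
    params config="/etc/libvirt/qemu/vm-web.xml" \
           hypervisor="qemu:///system" \
    op monitor interval=30s timeout=60s
```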
Third-party hypervisor support is included: you can make services running on hypervisors such as VMware vSphere or Microsoft Hyper-V highly available and manage them as if they were running on a physical server.