
IT Infrastructure Management

Mainframe

A mainframe is a high-capacity computer that often serves as the central data repository in an organization’s IT infrastructure. It is linked to users through less powerful devices such as workstations or terminals. Centralizing data in a single mainframe repository makes it easier to manage, update and protect the integrity of the data. Mainframes are generally used for large-scale computer processes that require higher availability and security than smaller-scale machines can provide. For example, the IBM z13 mainframe is capable of processing 2.5 billion transactions per day.

The original mainframes were housed in room-sized metal frames, commonly called “big iron.” In the past, a typical mainframe might have occupied 2,000 to 10,000 square feet. Newer mainframes are the size of a large refrigerator, occupying less than 10 square feet. Some computer manufacturers don’t use the term mainframe, calling any commercial-use computer a server; a “mainframe” is simply the largest type of server. In most cases, mainframe refers to computers that can support thousands of applications and input/output devices simultaneously, serving many thousands of users.

Long past their predicted extinction in 1996, mainframes remain the only type of computing hardware that can handle the huge volumes of transactions used in many industries today, including banking, insurance, healthcare, government and retail. According to IBM, 80% of the world’s corporate data is still managed by mainframes. More than a quarter of the mainframe processing capacity that IBM ships is used to run Linux. Mainframes are ideal for server consolidation: a single mainframe can run as many as 100,000 virtual Linux servers. SUSE Linux Enterprise provides support for IBM z Systems mainframes.

Computer Cluster

A computer cluster is a set of connected computers (nodes) that work together as if they are a single (much more powerful) machine. Unlike grid computers, where each node performs a different task, computer clusters assign the same task to each node. Nodes in a cluster are usually connected to each other through high-speed local area networks. Each node runs its own instance of an operating system. A computer cluster may range from a simple two-node system connecting two personal computers to a supercomputer with a cluster architecture. Computer clusters are often used for cost-effective high performance computing (HPC) and high availability (HA) by businesses of all sizes. If a single component fails in a computer cluster, the other nodes continue to provide uninterrupted processing.

Compared to a single computer, a computer cluster can provide faster processing speed, larger storage capacity, better data integrity, greater reliability and wider availability of resources. Computer clusters are usually dedicated to specific functions, such as load balancing, high availability, high performance or large-scale processing. Compared to a mainframe computer, the amount of power and processing speed produced by a cluster is more cost effective. The networked nodes in a cluster also create an efficient, distributed infrastructure that prevents bottlenecks, thus improving performance.

Computer clustering relies on centralized management software that makes the nodes available as orchestrated servers. The right enterprise operating system can prevent application downtime with clustering that replicates data across multiple computer clusters and provides service failover across any distance with geo clustering. SUSE Linux Enterprise High Availability Extension can protect workloads across globally distributed data centers. It allows companies to deploy both physical and virtual Linux clusters across data centers, ensuring business continuity.
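In practice, this kind of failover is handled by cluster resource managers such as Pacemaker and Corosync, which underpin SUSE Linux Enterprise High Availability Extension. The Python sketch below only illustrates the underlying idea: the node names are hypothetical and a simple ping stands in for real cluster heartbeats, so treat it as a conceptual illustration rather than how any particular cluster stack is implemented.

```python
# Conceptual sketch of the failover decision a cluster resource manager makes.
# Node names, the ping-based health check and the timing are illustrative only.
import subprocess
import time

NODES = ["node-a", "node-b", "node-c"]   # hypothetical cluster members
CHECK_INTERVAL = 5                        # seconds between health checks


def node_is_healthy(node: str) -> bool:
    """Treat a node as healthy if it answers a single ping."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", node],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0


def choose_active_node(current: str) -> str:
    """Keep the current node if it is healthy; otherwise fail over
    to the first healthy peer so the service stays available."""
    if node_is_healthy(current):
        return current
    for candidate in NODES:
        if candidate != current and node_is_healthy(candidate):
            print(f"Failing over from {current} to {candidate}")
            return candidate
    raise RuntimeError("No healthy node available")


if __name__ == "__main__":
    active = NODES[0]
    while True:
        active = choose_active_node(active)
        time.sleep(CHECK_INTERVAL)
```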

Tailored Data-Center Integration

Tailored Data-Center Integration (TDI) is a program that allows SAP HANA customers to leverage existing hardware and infrastructure components for their High-Performance Analytic Appliance (HANA) environment. Instead of using an all-in-one appliance with the necessary components pre-configured, TDI enables customers to use certain components that are already in their data centers. Thus, TDI provides a more cost-effective and flexible option for IT organizations deploying SAP HANA. TDI differs from the initial appliance delivery model, a pre-configured hardware setup with pre-installed software packages implemented by an SAP HANA hardware partner. Instead of an appliance, TDI provides several options for the hardware components required to run an SAP HANA environment.

With Tailored Data-Center Integration, customers can choose their preferred hardware vendors and infrastructure components from a menu of supported SAP HANA hardware. However, all compute server hardware must come from one SAP HANA partner because TDI does not support a mix of hardware from different vendors. Also, specific hardware is required for optimum performance: only certified storage systems listed in the official SAP HANA Hardware Directory may be used in TDI deployments. The maximum number of worker nodes for SAP HANA scale-out solutions is limited to 16 hosts in TDI environments. Installation of the SAP HANA software must be done by SAP HANA-certified administrators. Before taking an SAP HANA system deployed on TDI infrastructure into production, SAP recommends conducting a HANA Go-Live Check as offered by SAP Digital Business Services.

Tailored Data-Center Integration helps enterprises right-size their infrastructures to meet their business needs, protecting their current investments in data center infrastructure, tools and operational processes. TDI can also help organizations standardize IT, achieving lower total cost of ownership (TCO) in the process. SUSE Linux Enterprise Server for SAP Applications includes simplified, automated SAP HANA and SAP S/4HANA installation and management, including TDI deployments.

IT Infrastructure Management

Infrastructure management is the management of both technical and operational components—including hardware, software, policies, processes, data, facilities and equipment—for business effectiveness. It may be divided into systems management, network management and storage management. Enterprises use IT infrastructure management to: reduce duplication of effort; ensure compliance with IT standards and regulations; improve information flow; support flexibility in changing business markets; promote IT interoperability; maintain effective change management; and reduce overall IT costs.

IT infrastructure management helps organizations manage their IT resources in accordance with business needs and priorities. Aligning IT management with business strategy allows technology to create value—rather than drain resources—for the entire organization. Instead of dedicating IT resources to each computing technology and each line of business and managing them separately, IT infrastructure management converges the management of servers, applications, storage, networking, security and IT facilities. Integrated and automated management improves IT efficiency and agility, ultimately affecting business profitability.

IT infrastructure management tools can improve change management and protect the interdependencies in converged IT environments. For example, deploying, updating, patching and configuring multiple servers and systems can be automated with SUSE Manager, a program that manages and monitors Linux servers across physical, virtual and cloud environments. It manages a variety of hardware architectures, hypervisors and cloud platforms. Enterprises may use SUSE Manager to centralize the management of their Linux systems, virtual machines and other software-defined infrastructure (SDI) components. It can provide automated software, asset, patch and configuration management as well as system provisioning, orchestration and monitoring.
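For example, SUSE Manager exposes an XML-RPC API that administration scripts can call. The sketch below uses Python's standard xmlrpc.client to list registered systems; the hostname and credentials are placeholders, and the exact methods available should be verified against the API documentation for the installed SUSE Manager version.

```python
# Minimal sketch of querying the SUSE Manager XML-RPC API with Python's
# standard library. Hostname and credentials are placeholders; method names
# depend on the SUSE Manager version in use.
from xmlrpc.client import ServerProxy

MANAGER_URL = "https://suse-manager.example.com/rpc/api"   # placeholder

client = ServerProxy(MANAGER_URL)
session = client.auth.login("admin", "password")           # placeholder credentials

# List the systems registered with this SUSE Manager instance.
for system in client.system.listSystems(session):
    print(system["id"], system["name"])

client.auth.logout(session)
```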

Data Center Storage

Data center storage is the collective term for the hardware, software and processes that manage and monitor on-site data storage within a data center. It includes all IT assets that store, retrieve, distribute, back up or archive computer data and applications inside the data center facility. Whereas “IT storage” refers to both on-site and off-site storage assets, “data center storage” refers specifically to on-site assets. These may include hard disk drives, tape drives, direct-attached storage (DAS) devices, storage and backup management software utilities, storage networking technologies such as storage area networks (SAN) and network-attached storage (NAS), and redundant array of independent disks (RAID) devices.

Data center storage also includes the policies and procedures that govern data storage and retrieval such as data collection and distribution, access control, storage security, data availability, storage quotas, backup schedules, data retention schedules, and so on. In financial, medical and other highly regulated industries, data center storage must comply with government and industry regulations for data storage, information privacy and data security.

Enterprises that need flexible storage provisioning and a secure data storage environment are best served by a software-defined storage (SDS) solution. SUSE Enterprise Storage uses off-the-shelf servers and disk drives and enables the creation of on-site private clouds for data center storage. Private clouds provide storage-as-a-service to business units, and all data is stored on premises behind a firewall. Designed as a distributed storage cluster, SUSE Enterprise Storage provides virtually unlimited scalability. Additional storage capacity can be quickly provisioned and delivered as business needs change, and data placement is automatically rebalanced.
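As an illustration of programmatic access to such a cluster: SUSE Enterprise Storage is built on the Ceph distributed storage platform, so objects can be written to a storage pool from code. The sketch below uses the Ceph Python bindings (python3-rados); the pool name, object name and configuration path are placeholders and assume access to a running cluster.

```python
# Sketch of storing and reading an object in a Ceph-based storage cluster.
# Requires the python3-rados bindings and a reachable cluster; the pool name,
# object name and config path below are placeholders.
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

ioctx = cluster.open_ioctx("my-pool")          # placeholder pool name
ioctx.write_full("hello-object", b"hello, software-defined storage")
print(ioctx.read("hello-object"))

ioctx.close()
cluster.shutdown()
```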

Configuration Management Tools

Configuration management (CM) tools automate the process of identifying, documenting and tracking changes in the hardware, software and devices in an IT environment. CM tools show the system administrator all of the connected systems, their relationships and interdependencies, and the effects of change on system components. Enterprises use CM tools for change-impact analysis in order to reduce system disruption caused by changes to the hardware or software.

Configuration management tools help administrators maintain system consistency, a practice also known as configuration enforcement. This process ensures that new machines, software packages and updates are installed and configured according to the desired state. Consistent system components reduce support incidents, shorten IT problem resolution times and help maintain compliance. CM tools also provide version control and change control to maintain consistency across multiple IT sites. Popular open source configuration management tools include Chef, Puppet and Ansible.
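The "desired state" idea is easy to see in miniature. The sketch below is not taken from any particular CM tool; it simply compares a node's actual settings against a hypothetical desired state and reports the drift that configuration enforcement would correct. All keys and values are invented for illustration.

```python
# Toy illustration of configuration enforcement: compare a node's actual
# configuration against a desired state and report the differences.
DESIRED_STATE = {
    "ntp_server": "ntp.example.com",
    "ssh_root_login": "no",
    "packages": {"openssh", "chrony"},
}


def drift(actual: dict) -> dict:
    """Return the settings on a node that differ from the desired state."""
    changes = {}
    for key, desired in DESIRED_STATE.items():
        if actual.get(key) != desired:
            changes[key] = {"actual": actual.get(key), "desired": desired}
    return changes


node = {"ntp_server": "ntp.example.com", "ssh_root_login": "yes", "packages": {"openssh"}}
for setting, diff in drift(node).items():
    print(f"{setting}: {diff['actual']!r} -> {diff['desired']!r}")
```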

Most CM tools support Linux, Windows, Unix and mixed-platform environments. SUSE Manager is a configuration management tool for managing Linux systems on a variety of hardware architectures, hypervisors and cloud platforms. It automatically monitors and tracks configurations and changes in the infrastructure, maintains and demonstrates compliance for Linux workloads, and reduces risk by quickly identifying and remediating systems that are out of compliance. Enterprises may use SUSE Manager to centralize the management of their Linux systems, virtual machines and other software-defined infrastructure (SDI) components, with automated software, asset, patch and configuration management as well as system provisioning, orchestration and monitoring.

Configuration Management

Configuration management (CM) is the practice of making and tracking changes systematically so that an IT environment maintains its integrity over time. The physical attributes and functional capabilities of the IT system’s hardware and software, as well as the interdependencies of all system components, are documented and tracked by configuration management processes. CM monitors version numbers, updates that have been applied to installed software packages, and the physical locations and network addresses of hardware devices.

Configuration management includes evaluating proposed changes, tracking change status, documenting system changes and maintaining support documentation. Unlike asset management, which inventories the assets on hand, configuration management tracks changes throughout the system’s lifecycle. Configuration management is one of the operational processes in the IT Infrastructure Library (ITIL) service management framework.
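A minimal sketch of what such tracking might look like is shown below: each configuration item carries a version number plus a history recording when it changed, what changed and why. The fields and example data are invented for illustration and do not reflect any particular CMDB or ITIL tool.

```python
# Minimal sketch of a configuration item record that keeps a change history,
# illustrating the tracking that configuration management describes.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ConfigurationItem:
    name: str
    version: str
    attributes: dict
    history: list = field(default_factory=list)

    def apply_change(self, new_version: str, changes: dict, reason: str) -> None:
        """Record when, what and why alongside the change before applying it."""
        self.history.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "from_version": self.version,
            "to_version": new_version,
            "changes": changes,
            "reason": reason,
        })
        self.version = new_version
        self.attributes.update(changes)


web01 = ConfigurationItem("web01", "1.0", {"os": "SLES 15", "ip": "10.0.0.5"})
web01.apply_change("1.1", {"os": "SLES 15 SP1"}, reason="quarterly patch cycle")
print(web01.version, web01.history[-1]["reason"])
```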

Software configuration management (SCM) handles changes in software projects. It identifies the functional attributes of the software at various points in time, tracks changes requested against changes made, and monitors changes throughout the software development lifecycle. Software configuration management helps verify that the final delivered software has all of the features that are supposed to be included in the project release. Enterprises use CM and SCM to: ensure compliance; measure system performance; understand the total cost of ownership; see how changes affect key performance indicators; and make data-driven business decisions.

Data center administrators use configuration management tools to manage changes and protect interdependencies in complex IT environments. CM tracking enables consistent and repeatable server deployments. Updating, patching and configuring multiple servers and systems can be automated with CM tools such as SUSE Manager. SUSE Manager can automatically manage and monitor Linux servers across physical, virtual and cloud environments to ensure compliance with internal security policies and external regulations.

Software-Defined Infrastructure

Software-defined infrastructure (SDI) combines software-defined compute (SDC), software-defined networking (SDN) and software-defined storage (SDS) into a fully software-defined data center (SDDC). A software-defined data center is an IT facility where infrastructure elements such as networking, storage, processing and security are virtualized and delivered as a service. With SDI, software can control the entire computing infrastructure with minimal human intervention. SDI is hardware-independent and programmatically extensible, providing virtually unlimited growth potential for heterogeneous environments.

The SDI model allows many critical IT functions, such as backups and data recovery, to be fully integrated and automated. Applications can specify and configure the hardware they need to run on as part of their code, so SDI automatically handles application requirements, data security and disaster-preparedness functions. Software-defined infrastructure is typically built on open source technologies, allowing IT resources to be flexibly configured per application on commodity hardware. This improves data center agility and efficiency while decreasing hardware costs. SDI supports configuration rollback and cloning by versioning the data center landscape. Management dashboards can be used to provision and monitor the software-defined infrastructure, and workloads can be placed in private or public clouds.
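To make the declarative idea concrete, the toy sketch below compares an application's declared infrastructure requirements with what is currently provisioned and lists the reconciliation steps. All names and values are invented and do not correspond to any specific SDI product or API.

```python
# Toy sketch of the software-defined idea: an application declares the
# infrastructure it needs, and management software compares that declaration
# against what is actually provisioned and plans the changes.
DECLARED = {
    "compute": {"vcpus": 8, "memory_gb": 32},
    "storage": {"volumes": [{"name": "data", "size_gb": 500}]},
    "network": {"load_balancer": True},
}

PROVISIONED = {
    "compute": {"vcpus": 4, "memory_gb": 32},
    "storage": {"volumes": [{"name": "data", "size_gb": 500}]},
    "network": {"load_balancer": False},
}


def plan(declared: dict, provisioned: dict) -> list:
    """List the actions needed to bring the environment to its declared state."""
    actions = []
    for layer, wanted in declared.items():
        if provisioned.get(layer) != wanted:
            actions.append(f"reconfigure {layer}: {provisioned.get(layer)} -> {wanted}")
    return actions


for action in plan(DECLARED, PROVISIONED):
    print(action)
```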

SUSE Manager can help businesses utilizing SDI solutions to reduce costs and drive innovation by supporting new business processes such as DevOps. Built for Linux, SUSE Manager is an SDI management solution that centralizes the management of Linux systems, virtual machines and containers. It provides automated software, asset, patch and configuration management as well as system provisioning, orchestration and monitoring across a variety of hardware architectures, hypervisors and cloud platforms.
