4.1 Introduction to DeepSea

The goal of DeepSea is to save the administrator time and to perform complex operations on a Ceph cluster with confidence. This idea has driven a few design choices. Before presenting those choices, some observations are necessary.

All software has configuration, and sometimes the default is sufficient. This is not the case with Ceph: Ceph is flexible almost to a fault. Reducing this complexity would force administrators into preconceived configurations. Several of the existing Ceph installation solutions create a demonstration cluster of three nodes; however, the most interesting features of Ceph require more.

One aspect of configuration management tools is accessing data such as the addresses and device names of the individual servers. For a distributed storage system such as Ceph, that can mean hundreds of servers. Collecting the information and entering the data manually into a configuration management tool is prohibitively time consuming and error prone.

The steps necessary to provision the servers, collect the configuration, and configure and deploy Ceph are mostly the same. However, this does not address managing the separate functions. For day-to-day operations, the ability to trivially add hardware to a given function and to remove it gracefully is a requirement.

With these observations in mind, DeepSea addresses them with the following strategy: it consolidates the administrator's decisions in a single location. The decisions revolve around cluster assignment, role assignment, and profile assignment. DeepSea also collects each set of tasks into a simple goal. Each goal is a Stage (example commands for running the stages follow the list):

  • Stage 0 (provisioning): This stage is optional, as many sites provide their own provisioning of servers. If you do not have your own provisioning tool, you should run this stage. During this stage, all required updates are applied and your systems may be rebooted.

  • Stage 1 (discovery): Here you detect all hardware in your cluster and collect the information necessary for the Ceph configuration. For details about configuration, refer to Section 4.3, Configuration and Customization.

  • Stage 2 (configuration): In this stage you prepare the configuration data in a particular format.

  • Stage 3 (deployment): This stage creates a basic Ceph cluster with OSDs and monitors.

  • Stage 4 (services): Additional features of Ceph such as iSCSI, RadosGW, and CephFS can be installed in this stage. Each is optional.

  • Stage 5 (removal): This stage is not mandatory, and it is usually not needed during the initial setup. In this stage, the roles of minions and the cluster configuration are removed. Run this stage when you need to remove a storage node from your cluster; for details, refer to Section 27.9.3, Removing and Reinstalling Salt Cluster Nodes.
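
The stages are driven from the Salt master with the state.orchestrate runner described in Section 4.1.1. As a sketch of an initial deployment, assuming the stage orchestrations are named ceph.stage.0 through ceph.stage.4 under /srv/salt/ceph/stage (state.orch is shorthand for state.orchestrate):

  salt-run state.orch ceph.stage.0   # provisioning (optional if servers are already provisioned)
  salt-run state.orch ceph.stage.1   # discovery
  salt-run state.orch ceph.stage.2   # configuration
  salt-run state.orch ceph.stage.3   # deployment
  salt-run state.orch ceph.stage.4   # services (optional)

Stage 5 is run the same way, but only when roles or nodes need to be removed.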

4.1.1 Organization and Important Locations

Salt has several standard directories and naming conventions used on your master node:

/srv/pillar

The directory stores configuration data for your cluster minions. Pillar is an interface for providing global configuration values to all the minions in your cluster.
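
Because this is standard Salt pillar data, it can be inspected with Salt's pillar execution module. For example, the following command lists all pillar values as each minion sees them (the target '*' matches all minions):

  salt '*' pillar.items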

/srv/salt/

The directory stores Salt state files (also called sls files). State files are formatted descriptions of the states in which the cluster should be. For details, refer to the Salt documentation.
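
A single state can also be applied outside of an orchestration with Salt's state.apply function. A minimal sketch, where MINION_NAME and the state name ceph.example are placeholders rather than files shipped by DeepSea:

  salt 'MINION_NAME' state.apply ceph.example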

/srv/modules/runners

The directory stores Python scripts known as runners. Runners are executed on the master node.
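
Runners are invoked with the salt-run command on the master. For example, Salt's built-in manage runner reports which minions are responding; DeepSea's own runners are called in the same way:

  salt-run manage.status   # list responding and unresponsive minions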

/srv/salt/_modules

The directory stores Python scripts known as modules. The modules are applied to all minions in your cluster.
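
Custom modules placed in this directory must be synchronized to the minions before they can be called. Salt's saltutil functions handle the synchronization; MODULE_NAME and FUNCTION_NAME below are placeholders for illustration:

  salt '*' saltutil.sync_modules       # distribute custom modules to all minions
  salt '*' MODULE_NAME.FUNCTION_NAME   # call a function from a synchronized module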

/srv/pillar/ceph

The directory is used by DeepSea. Collected configuration data are stored there.
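
Because the collected data ends up as ordinary pillar data, individual values can be read back per minion once discovery has run; KEY_NAME below is only a placeholder:

  salt '*' pillar.get KEY_NAME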

/srv/salt/ceph

Directory used by DeepSea. The directory stores sls files that can be in different formats, but each subdirectory contains only one type of sls file. For example, /srv/salt/ceph/stage contains orchestration files that are executed by salt-run state.orchestrate.
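
Because /srv/salt is the Salt file root, the subdirectories map directly onto dotted state names. Assuming the stage naming used above, the orchestration files in /srv/salt/ceph/stage are therefore addressed as ceph.stage.0 through ceph.stage.5, for example:

  salt-run state.orchestrate ceph.stage.3   # run the deployment stage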