A.0 Example Procedure of Manual Ceph Installation

The following procedure shows the commands that you need to install a Ceph storage cluster manually.

  1. Generate the key secrets for the Ceph services you plan to run. You can use the following command to generate them:

    python -c "import os ; import struct ; import time; import base64 ; \
     key = os.urandom(16) ; header = struct.pack('<hiih',1,int(time.time()),0,len(key)) ; \
     print base64.b64encode(header + key)"
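
     Alternatively, ceph-authtool can generate and print a key in the same
     format; a minimal equivalent, assuming the Ceph packages are already
     installed:

    ceph-authtool --gen-print-key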
  2. Add the keys to the related keyrings. Start with client.admin, then the monitors, and then the other related services, such as OSD, RADOS Gateway, or MDS:

    ceph-authtool -n client.admin \
     --create-keyring /etc/ceph/ceph.client.admin.keyring \
     --cap mds 'allow *' --cap mon 'allow *' --cap osd 'allow *'
    ceph-authtool -n mon. \
     --create-keyring /var/lib/ceph/bootstrap-mon/ceph-osceph-03.keyring \
     --set-uid=0 --cap mon 'allow *'
    ceph-authtool -n client.bootstrap-osd \
     --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
     --cap mon 'allow profile bootstrap-osd'
    ceph-authtool -n client.bootstrap-rgw \
     --create-keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring \
     --cap mon 'allow profile bootstrap-rgw'
    ceph-authtool -n client.bootstrap-mds \
     --create-keyring /var/lib/ceph/bootstrap-mds/ceph.keyring \
     --cap mon 'allow profile bootstrap-mds'
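
     The commands above create the keyrings and their capability entries. If
     your ceph-authtool version does not generate a secret implicitly, pass
     --gen-key to create one in place, or --add-key to store the secret from
     step 1. A sketch, assuming the generated key is in the shell variable
     KEY:

    ceph-authtool /etc/ceph/ceph.client.admin.keyring \
     -n client.admin --add-key "$KEY"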
  3. Create a monmap—a database of all monitors in a cluster:

    monmaptool --create --fsid eaac9695-4265-4ca8-ac2a-f3a479c559b1 \
     /tmp/tmpuuhxm3/monmap
    monmaptool --add osceph-02 192.168.43.60 /tmp/tmpuuhxm3/monmap
    monmaptool --add osceph-03 192.168.43.96 /tmp/tmpuuhxm3/monmap
    monmaptool --add osceph-04 192.168.43.80 /tmp/tmpuuhxm3/monmap
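
     To verify the result, monmaptool can print the map's contents:

    monmaptool --print /tmp/tmpuuhxm3/monmap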
  4. Create a new keyring and import the admin and monitor keys into it. Then use it to create the monitor's data store and start the monitor:

    ceph-authtool --create-keyring /tmp/tmpuuhxm3/keyring \
     --import-keyring /var/lib/ceph/bootstrap-mon/ceph-osceph-03.keyring
    ceph-authtool /tmp/tmpuuhxm3/keyring \
     --import-keyring /etc/ceph/ceph.client.admin.keyring
    sudo -u ceph ceph-mon --mkfs -i osceph-03 \
     --monmap /tmp/tmpuuhxm3/monmap --keyring /tmp/tmpuuhxm3/keyring
    systemctl restart ceph-mon@osceph-03
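
     The restart only covers the current boot; to have the monitor start
     automatically after a reboot, enable its unit as well:

    systemctl enable ceph-mon@osceph-03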
  5. Check the monitor's state in systemd:

    systemctl show --property ActiveState ceph-mon@osceph-03
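
     A terser alternative, if you only need the active/inactive answer:

    systemctl is-active ceph-mon@osceph-03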
  6. Check that Ceph is running and reports the monitor status:

    ceph --cluster=ceph \
     --admin-daemon /var/run/ceph/ceph-mon.osceph-03.asok mon_status
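
     On current Ceph releases, ceph daemon is a shorthand for the same
     admin-socket query, assuming the default socket location:

    ceph daemon mon.osceph-03 mon_status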
  7. Check the specific services' status using the existing keys:

    ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
     --name client.admin -f json-pretty status
    [...]
    ceph --connect-timeout 5 \
     --keyring /var/lib/ceph/bootstrap-mon/ceph-osceph-03.keyring \
     --name mon. -f json-pretty status
  8. Import the bootstrap keyrings for the remaining Ceph services and check the status:

    ceph auth import -i /var/lib/ceph/bootstrap-osd/ceph.keyring
    ceph auth import -i /var/lib/ceph/bootstrap-rgw/ceph.keyring
    ceph auth import -i /var/lib/ceph/bootstrap-mds/ceph.keyring
    ceph --cluster=ceph \
     --admin-daemon /var/run/ceph/ceph-mon.osceph-03.asok mon_status
    ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
     --name client.admin -f json-pretty status
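
     To confirm that the bootstrap keys were imported, you can list the
     cluster's authentication entries:

    ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
     --name client.admin auth list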
  9. Prepare disks/partitions for OSDs, using the XFS file system:

    ceph-disk -v prepare --fs-type xfs --data-dev --cluster ceph \
     --cluster-uuid eaac9695-4265-4ca8-ac2a-f3a479c559b1 /dev/vdb
    ceph-disk -v prepare --fs-type xfs --data-dev --cluster ceph \
     --cluster-uuid eaac9695-4265-4ca8-ac2a-f3a479c559b1 /dev/vdc
    [...]
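
     A quick way to verify the prepared partitions is to let ceph-disk list
     what it recognizes:

    ceph-disk list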
  10. Activate the partitions:

    ceph-disk -v activate --mark-init systemd --mount /dev/vdb1
    ceph-disk -v activate --mark-init systemd --mount /dev/vdc1
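
     After activation the OSDs should report as up and in; you can check
     with:

    ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
     --name client.admin osd tree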
  11. For SUSE Enterprise Storage version 2.1 and earlier, create the default pools. The two trailing numbers are the pool's pg_num and pgp_num values:

    ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
     --name client.admin osd pool create .users.swift 16 16
    ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
     --name client.admin osd pool create .intent-log 16 16
    ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
     --name client.admin osd pool create .rgw.gc 16 16
    ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
     --name client.admin osd pool create .users.uid 16 16
    ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
     --name client.admin osd pool create .rgw.control 16 16
    ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
     --name client.admin osd pool create .users 16 16
    ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
     --name client.admin osd pool create .usage 16 16
    ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
     --name client.admin osd pool create .log 16 16
    ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
     --name client.admin osd pool create .rgw 16 16
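
     List the pools to verify that they were all created:

    ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
     --name client.admin osd lspools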
  12. Create the RADOS Gateway instance key from the bootstrap key:

    ceph --connect-timeout 5 --cluster ceph --name client.bootstrap-rgw \
     --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create \
     client.rgw.0dc1e13033d2467eace46270f0048b39 osd 'allow rwx' mon 'allow rw' \
     -o /var/lib/ceph/radosgw/ceph-rgw.rgw_name/keyring
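
     Note that rgw_name is a placeholder for your gateway instance name, and
     the directory in the -o path is not created automatically; create it
     first if needed:

    mkdir -p /var/lib/ceph/radosgw/ceph-rgw.rgw_name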
  13. Enable and start RADOS Gateway:

    systemctl enable ceph-radosgw@rgw.rgw_name
    systemctl start ceph-radosgw@rgw.rgw_name
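
     You can verify the gateway's state the same way as for the monitor in
     step 5:

    systemctl show --property ActiveState ceph-radosgw@rgw.rgw_name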
  14. Optionally, create the MDS instance key from the bootstrap key, then enable and start it:

    ceph --connect-timeout 5 --cluster ceph --name client.bootstrap-mds \
     --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create \
     mds.mds.rgw_name osd 'allow rwx' mds 'allow' \
     mon 'allow profile mds' \
     -o /var/lib/ceph/mds/ceph-mds.rgw_name/keyring
    systemctl enable ceph-mds@mds.rgw_name
    systemctl start ceph-mds@mds.rgw_name
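
     To confirm that the metadata server registered with the cluster, check
     the MDS map:

    ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
     --name client.admin mds stat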