Cephadm Best Practices for Deploying Ceph Octopus v15.2

Merouane Agar
Jan 27, 2021

Cephadm is new in the Octopus v15.2.0 release and does not support older versions of Ceph.

Cephadm manages the full lifecycle of a Ceph cluster. It starts by bootstrapping a tiny Ceph cluster on a single node and then uses the orchestration interface to expand the cluster to include all hosts and to provision all Ceph daemons and services. This can be performed via the Ceph command-line interface (CLI) or dashboard (GUI).

Ceph Octopus architecture

1. Prerequisites

  • Install the python3, podman, chrony and lvm2 packages on all hosts (a one-line loop covering all hosts is sketched at the end of this section)
[root@ceph1 ~]# yum install -y python3 podman chrony lvm2
[root@ceph2 ~]# yum install -y python3 podman chrony lvm2
[root@ceph3 ~]# yum install -y python3 podman chrony lvm2
  • Start and enable chrony on all hosts
[root@ceph1 ~]# systemctl enable --now chronyd.service
  • Create user with passwordless sudo access on all hosts
[root@ceph1 ~]# useradd manager
[root@ceph1 ~]# echo "admin01" | passwd manager --stdin
[root@ceph1 ~]# chage -I -1 -m 0 -M 99999 -E -1 manager
[root@ceph1 ~]# echo "manager ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/90-manager-users
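
These prerequisites must be carried out on every host. As a convenience, the package and chrony steps can be pushed from ceph1 in a single loop; this is only a sketch and assumes root SSH access from ceph1 to all three hosts is already in place (re-running it on a host that is already set up is harmless):
[root@ceph1 ~]# for h in ceph1 ceph2 ceph3; do ssh root@"$h" 'yum install -y python3 podman chrony lvm2 && systemctl enable --now chronyd.service'; done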

2. Install cephadm (only on the first host)

  • Download cephadm
[root@ceph1 ~]# wget https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm && chmod +x cephadm
  • Install the Ceph Octopus repo and cephadm on the host
[root@ceph1 ~]# ./cephadm add-repo --release octopus
[root@ceph1 ~]# ./cephadm install
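
At this point cephadm is installed as a regular package and available on the PATH. A quick sanity check (the version string you see depends on the Octopus point release that was pulled):
[root@ceph1 ~]# which cephadm
[root@ceph1 ~]# cephadm version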

3. Bootstrap a Ceph cluster

[root@ceph1 ~]# cephadm bootstrap --mon-ip "$(hostname -i)" --ssh-user manager
...
Enabling firewalld port 8443/tcp in current zone...
Ceph Dashboard is now available at:

URL: https://ceph1:8443/
User: admin
Password: 5heh8n7m4y

You can access the Ceph CLI with:

sudo /usr/sbin/cephadm shell --fsid 6926a56e-5aab-11eb-9580-164dab17ec3b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Please consider enabling telemetry to help improve Ceph:

ceph telemetry on

For more information see:

https://docs.ceph.com/docs/master/mgr/telemetry/
  • Install the Ceph client on the host
[root@ceph1 ~]# yum install -y ceph-common
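
With ceph-common installed and the admin keyring written to /etc/ceph during bootstrap, the ceph CLI can talk to the new single-node cluster directly, without entering the cephadm shell. A quick check; at this stage the output should show a single monitor and manager, and the cephadm orchestrator backend reported as available:
[root@ceph1 ~]# ceph status
[root@ceph1 ~]# ceph orch status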

4. Add hosts to the cluster

[root@ceph1 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub manager@ceph2
[root@ceph1 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub manager@ceph3
[root@ceph1 ~]# ceph orch host add ceph2
[root@ceph1 ~]# ceph orch host add ceph3
  • Apply monitors to your 3 hosts
[root@ceph1 ~]# ceph orch apply mon "ceph1,ceph2,ceph3"
[root@ceph1 ~]# ceph orch ls
NAME           RUNNING  REFRESHED  AGE  PLACEMENT          IMAGE ID
alertmanager   1/1      4m ago     2h   count:1
crash          3/3      5m ago     2h   *
grafana        1/1      4m ago     2h   count:1
mgr            2/2      5m ago     2h   count:2
mon            3/3      5m ago     3m   ceph1;ceph2;ceph3
node-exporter  3/3      5m ago     2h   *
prometheus     1/1      4m ago     2h   count:1
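
The hosts added above can also be verified with the orchestrator's host listing; ceph2 and ceph3 should now appear alongside ceph1:
[root@ceph1 ~]# ceph orch host ls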

5. Deploy OSDs

  • Display the inventory of storage devices available on all cluster hosts
[root@ceph1 ~]# ceph orch device ls
Hostname  Path      Type  Serial       Size  Health   Ident  Fault  Available
ceph1     /dev/sdb  hdd   drive-scsi2  107G  Unknown  N/A    N/A    Yes
ceph1     /dev/sdc  hdd   drive-scsi3  107G  Unknown  N/A    N/A    Yes
ceph1     /dev/sdd  hdd   drive-scsi4  107G  Unknown  N/A    N/A    Yes
ceph2     /dev/sdb  hdd   drive-scsi4  107G  Unknown  N/A    N/A    Yes
ceph2     /dev/sdc  hdd   drive-scsi3  107G  Unknown  N/A    N/A    Yes
ceph2     /dev/sdd  hdd   drive-scsi2  107G  Unknown  N/A    N/A    Yes
ceph3     /dev/sdb  hdd   drive-scsi4  107G  Unknown  N/A    N/A    Yes
ceph3     /dev/sdc  hdd   drive-scsi3  107G  Unknown  N/A    N/A    Yes
ceph3     /dev/sdd  hdd   drive-scsi2  107G  Unknown  N/A    N/A    Yes
  • Tell Ceph to consume any available and unused storage device (a more selective, spec-file based alternative is sketched at the end of this section)
[root@ceph1 ~]# ceph orch apply osd --all-available-devices
Scheduled osd.all-available-devices update...
  • Check the deployment
[root@ceph1 ~]# ceph orch ls 
NAME                       RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager               1/1      4m ago     2h   count:1
crash                      3/3      4m ago     2h   *
grafana                    1/1      4m ago     2h   count:1
mgr                        2/2      4m ago     2h   count:2
mon                        3/3      4m ago     26m  ceph1;ceph2;ceph3
node-exporter              3/3      4m ago     2h   *
osd.all-available-devices  9/9      4m ago     5m   *
prometheus                 1/1      4m ago     2h   count:1
  • Check ceph status
[root@ceph1 ~]# ceph -s 
  cluster:
    id:     78c597f2-6080-11eb-b0a3-8ee8efbd56e1
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 23m)
    mgr: ceph1.koarbq(active, since 2h), standbys: ceph2.dqggxs
    osd: 9 osds: 9 up (since 2m), 9 in (since 2m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   9.0 GiB used, 891 GiB / 900 GiB avail
    pgs:     1 active+clean
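
The --all-available-devices shortcut used above is the simplest option. When more control is needed (for example, restricting OSDs to rotational drives or to a subset of hosts), the orchestrator also accepts an OSD service specification file. The snippet below is only a sketch; the service_id, host_pattern and device filter are example values to adapt:
service_type: osd
service_id: example_osd_spec      # hypothetical name for this OSD service
placement:
  host_pattern: 'ceph*'           # hosts whose names match this pattern
data_devices:
  rotational: 1                   # only consume spinning disks for data
Such a specification is applied with ceph orch apply osd -i <file> and is then scheduled in the same way as the all-available-devices service above.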

6. Dashboard
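
The dashboard is served by the active manager at the URL printed during bootstrap (https://ceph1:8443/ in this walkthrough); log in with the admin user and the generated password from the bootstrap output. If the URL has been lost, the endpoints exposed by the manager modules can be listed with:
[root@ceph1 ~]# ceph mgr services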

Conclusion

Cephadm makes Ceph very easy to install for a new user and makes a Ceph cluster easy to maintain over time through its seamless integration with the Ceph CLI and GUI. A new orchestrator API was first introduced in Ceph Nautilus to provide a generic way for Ceph (the CLI and the dashboard) to interact with its deployment environment, whether it is Rook or ceph-ansible or DeepSea, but only with Octopus has this reached a level of maturity where it provides meaningful abstraction across multiple backends: Rook for Kubernetes environments, and Cephadm for everyone else.

For more information, visit the official documentation page: cephadm

Thank you 🙏🙏🙏
