Deploying a Ceph cluster on a single host

Merouane Agar
Jan 21, 2021

Cephadm manages the full lifecycle of a Ceph cluster. It starts by bootstrapping a tiny Ceph cluster on a single node and then uses the orchestration interface to expand the cluster to include all hosts and to provision all Ceph daemons and services. This can be performed via the Ceph command-line interface (CLI) or dashboard (GUI).

Cephadm is new in the Octopus v15.2.0 release and does not support older versions of Ceph.

We will deploy a Ceph cluster with 3 disks on a single CentOS 8 server using the cephadm tool.

[root@ceph-centos8 ~]# lsblk 
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 19G 0 part
  ├─cs_ceph--centos8-root 253:0 0 17G 0 lvm /
  └─cs_ceph--centos8-swap 253:1 0 2G 0 lvm [SWAP]
sdb 8:16 0 32G 0 disk
sdc 8:32 0 32G 0 disk
sdd 8:48 0 32G 0 disk
sr0 11:0 1 9G 0 rom

1 - Install cephadm

  • To use cephadm you must first install python3, podman, chrony, and lvm2
[root@ceph-centos8 ~]# yum install -y python3 podman chrony lvm2
  • Download cephadm
[root@ceph-centos8 ~]# wget https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm && chmod +x cephadm

Although the standalone script is sufficient to get a cluster started, it is convenient to have the cephadm command installed on the host. To install the packages that provide the cephadm command for the Octopus release, run the following commands:

  • Install repo ceph
[root@ceph-centos8 ~]# ./cephadm add-repo --release octopus
Writing repo to /etc/yum.repos.d/ceph.repo...
Enabling EPEL...
  • Install the cephadm command on the host
[root@ceph-centos8 ~]# ./cephadm install
Installing packages ['cephadm']...
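
You can sanity-check the installation at this point. Depending on the cephadm build, this may pull the Ceph container image on its first run:

[root@ceph-centos8 ~]# cephadm version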

2 - Bootstrap a Ceph cluster

[root@ceph-centos8 ~]# mkdir /etc/ceph 
[root@ceph-centos8 ~]# cephadm bootstrap --mon-ip "$(hostname -i)"
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
podman|docker (/usr/bin/podman) is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 6926a56e-5aab-11eb-9580-164dab17ec3b
Verifying IP 10.9.0.14 port 3300 ...
Verifying IP 10.9.0.14 port 6789 ...
Mon IP 10.9.0.14 is in CIDR network 10.9.0.0/24
Pulling container image docker.io/ceph/ceph:v15...
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
firewalld ready
Enabling firewalld service ceph-mon in current zone...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network...
Creating mgr...
Verifying port 9283 ...
firewalld ready
Enabling firewalld service ceph in current zone...
firewalld ready
Enabling firewalld port 9283/tcp in current zone...
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Wrote config to /etc/ceph/ceph.conf
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/10)...
mgr not available, waiting (2/10)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for Mgr epoch 5...
Mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to to /etc/ceph/ceph.pub
Adding key to root@localhost's authorized_keys...
Adding host ceph-centos8...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Enabling mgr prometheus module...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for Mgr epoch 13...
Mgr epoch 13 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
firewalld ready
Enabling firewalld port 8443/tcp in current zone...
Ceph Dashboard is now available at:

URL: https://ceph-centos8:8443/
User: admin
Password: im623nomjw

You can access the Ceph CLI with:

sudo /usr/sbin/cephadm shell --fsid 6926a56e-5aab-11eb-9580-164dab17ec3b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Please consider enabling telemetry to help improve Ceph:

ceph telemetry on

For more information see:

https://docs.ceph.com/docs/master/mgr/telemetry/
  • Inspection

The cephadm utility downloaded several images from Docker Hub and started a container for each component, managed as systemd units.

All available storage devices are also used as OSDs.
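
Note: depending on the cephadm version, OSDs are not always created automatically at bootstrap. If your disks do not show up as OSDs, you can list the eligible devices and tell the orchestrator to consume them all:

[root@ceph-centos8 ~]# ceph orch device ls
[root@ceph-centos8 ~]# ceph orch apply osd --all-available-devices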

[root@ceph-centos8 yum.repos.d]# podman ps -a --format "{{.Names}}"  
ceph-6926a56e-5aab-11eb-9580-164dab17ec3b-osd.2
ceph-6926a56e-5aab-11eb-9580-164dab17ec3b-osd.1
ceph-6926a56e-5aab-11eb-9580-164dab17ec3b-osd.0
ceph-6926a56e-5aab-11eb-9580-164dab17ec3b-grafana.ceph-centos8
ceph-6926a56e-5aab-11eb-9580-164dab17ec3b-alertmanager.ceph-centos8
ceph-6926a56e-5aab-11eb-9580-164dab17ec3b-prometheus.ceph-centos8
ceph-6926a56e-5aab-11eb-9580-164dab17ec3b-node-exporter.ceph-centos8
ceph-6926a56e-5aab-11eb-9580-164dab17ec3b-crash.ceph-centos8
ceph-6926a56e-5aab-11eb-9580-164dab17ec3b-mgr.ceph-centos8.ndcbss
ceph-6926a56e-5aab-11eb-9580-164dab17ec3b-mon.ceph-centos8
[root@ceph-centos8 yum.repos.d]# podman images
REPOSITORY                    TAG      IMAGE ID      CREATED        SIZE
docker.io/ceph/ceph           v15      5553b0cb212c  5 weeks ago    965 MB
docker.io/ceph/ceph-grafana   6.6.2    a0dce381714a  7 months ago   519 MB
docker.io/prom/prometheus     v2.18.1  de242295e225  8 months ago   141 MB
docker.io/prom/alertmanager   v0.20.0  0881eb8f169f  13 months ago  53.5 MB
docker.io/prom/node-exporter  v0.18.1  e5a616e4b9cf  19 months ago  24.3 MB
[root@ceph-centos8 yum.repos.d]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 19G 0 part
  ├─cs_ceph--centos8-root 253:0 0 17G 0 lvm /
  └─cs_ceph--centos8-swap 253:1 0 2G 0 lvm [SWAP]
sdb 8:16 0 32G 0 disk
└─ceph--62cc4b8e--7f73--45f6--be3b--9560c2445df6-osd--block--05d5bd07--faea--4c90--ad99--44733c3c9fc1 253:2 0 32G 0 lvm
sdc 8:32 0 32G 0 disk
└─ceph--80fb0d3e--bd6d--4c04--a0b9--3175e4ccbeb6-osd--block--24f9553d--5e65--4e96--8cfc--c5f97224091c 253:3 0 32G 0 lvm
sdd 8:48 0 32G 0 disk
└─ceph--9347524a--2857--43e2--943f--ad69e38fc4e4-osd--block--131a05c0--c160--4232--81c4--abe5dce82a23 253:4 0 32G 0 lvm
sr0 11:0 1 9G 0 rom
[root@ceph-centos8 yum.repos.d]# systemctl status ceph-6926a56e-5aab-11eb-9580-164dab17ec3b@<TAB><TAB>
ceph-6926a56e-5aab-11eb-9580-164dab17ec3b@alertmanager.ceph-centos8.service
ceph-6926a56e-5aab-11eb-9580-164dab17ec3b@node-exporter.ceph-centos8.service
ceph-6926a56e-5aab-11eb-9580-164dab17ec3b@crash.ceph-centos8.service
ceph-6926a56e-5aab-11eb-9580-164dab17ec3b@osd.0.service
ceph-6926a56e-5aab-11eb-9580-164dab17ec3b@grafana.ceph-centos8.service
ceph-6926a56e-5aab-11eb-9580-164dab17ec3b@osd.1.service
ceph-6926a56e-5aab-11eb-9580-164dab17ec3b@mgr.ceph-centos8.ndcbss.service
ceph-6926a56e-5aab-11eb-9580-164dab17ec3b@osd.2.service
ceph-6926a56e-5aab-11eb-9580-164dab17ec3b@mon.ceph-centos8.service
ceph-6926a56e-5aab-11eb-9580-164dab17ec3b@prometheus.ceph-centos8.service
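
To drill into a single daemon, the systemd unit names above can be used with systemctl and journalctl, and cephadm can also fetch logs by daemon name (the mon daemon name below is the one from this cluster; adjust it to yours):

[root@ceph-centos8 ~]# systemctl status ceph-6926a56e-5aab-11eb-9580-164dab17ec3b@mon.ceph-centos8.service
[root@ceph-centos8 ~]# cephadm logs --name mon.ceph-centos8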

3 - Use Ceph

  • With cephadm shell
[root@ceph-centos8 ~]# cephadm shell
Inferring fsid 6926a56e-5aab-11eb-9580-164dab17ec3b
Inferring config /var/lib/ceph/6926a56e-5aab-11eb-9580-164dab17ec3b/mon.ceph-centos8/config
Using recent ceph image docker.io/ceph/ceph:v15
WARNING: The same type, major and minor should not be used for multiple devices.
WARNING: The same type, major and minor should not be used for multiple devices.
WARNING: The same type, major and minor should not be used for multiple devices.
[ceph: root@ceph-centos8 /]# ceph -s
  cluster:
    id:     6926a56e-5aab-11eb-9580-164dab17ec3b
    health: HEALTH_WARN
            Degraded data redundancy: 1 pg undersized

  services:
    mon: 1 daemons, quorum ceph-centos8 (age 36h)
    mgr: ceph-centos8.ndcbss(active, since 36h)
    osd: 3 osds: 3 up (since 26h), 3 in (since 26h); 1 remapped pgs

  data:
    pools:   1 pools, 1 pgs
    objects: 4 objects, 0 B
    usage:   3.0 GiB used, 93 GiB / 96 GiB avail
    pgs:     8/12 objects misplaced (66.667%)
             1 active+undersized+remapped

  progress:
    Rebalancing after osd.1 marked in (2h)
      [............................]
  • You can also run the ceph CLI directly on the host, but you first have to install the ceph-common package
[root@ceph-centos8 ~]# yum install ceph-common
[root@ceph-centos8 ~]# ceph -s
  cluster:
    id:     6926a56e-5aab-11eb-9580-164dab17ec3b
    health: HEALTH_WARN
            Degraded data redundancy: 1 pg undersized

  services:
    mon: 1 daemons, quorum ceph-centos8 (age 36h)
    mgr: ceph-centos8.ndcbss(active, since 36h)
    osd: 3 osds: 3 up (since 26h), 3 in (since 26h); 1 remapped pgs

  data:
    pools:   1 pools, 1 pgs
    objects: 4 objects, 0 B
    usage:   3.0 GiB used, 93 GiB / 96 GiB avail
    pgs:     8/12 objects misplaced (66.667%)
             1 active+undersized+remapped

  progress:
    Rebalancing after osd.1 marked in (2h)
      [............................]

4 - Create a storage pool

[root@ceph-centos8 ~]# ceph osd pool create my-pool 128
[root@ceph-centos8 ~]# rbd pool init my-pool
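
Note: the HEALTH_WARN / undersized PG seen in ceph -s above is expected on a single host: the default CRUSH rule spreads replicas across hosts, and with only one host a replicated pool of size 3 can never place all of its copies. For a single-node test cluster you can switch the pool to a rule whose failure domain is the OSD, then do a quick RBD smoke test (the rule name replicated_osd and the image name test-image below are arbitrary):

[root@ceph-centos8 ~]# ceph osd crush rule create-replicated replicated_osd default osd
[root@ceph-centos8 ~]# ceph osd pool set my-pool crush_rule replicated_osd
[root@ceph-centos8 ~]# rbd create my-pool/test-image --size 1G
[root@ceph-centos8 ~]# rbd ls my-pool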

5 - Add hosts to the cluster

[root@ceph-centos8 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-host1
[root@ceph-centos8 ~]# ceph orch host add ceph-host1
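
The new host needs the same prerequisites as the first one (python3, podman, chrony, lvm2) before the orchestrator can manage it. Once added, you can verify it from the admin node:

[root@ceph-host1 ~]# yum install -y python3 podman chrony lvm2
[root@ceph-centos8 ~]# ceph orch host ls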

6 - Test other services
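
The orchestrator can list what is already running and deploy additional services. As a minimal sketch (the CephFS volume name myfs is arbitrary):

[root@ceph-centos8 ~]# ceph orch ls
[root@ceph-centos8 ~]# ceph orch ps
[root@ceph-centos8 ~]# ceph fs volume create myfs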

Conclusion

Cephadm makes Ceph very easy to install for a new user and makes a Ceph cluster easy to maintain over time through seamless integration with the Ceph CLI and GUI. A new orchestrator API was first introduced in Ceph Nautilus to provide a generic way for Ceph (the CLI and the dashboard) to interact with its deployment environment, whether it is Rook, ceph-ansible, or DeepSea, but only with Octopus has this reached a level of maturity where it provides a meaningful abstraction across multiple backends: Rook for Kubernetes environments, and cephadm for everyone else.

For more information, visit the official page: cephadm

Thank you 🙏🙏🙏
