Cephadm: Upgrade Ceph minor version 15.2.8 to 15.2.9

Merouane Agar
Feb 25, 2021

Procedure for upgrading a Ceph cluster with cephadm


Upgrade

  • Check the current version
[root@ceph1 ~]# ceph versions
{
    "mon": {
        "ceph version 15.2.8 (bdf3eebcd22d7d0b3dd4d5501bee5bac354d5b55) octopus (stable)": 3
    },
    "mgr": {
        "ceph version 15.2.8 (bdf3eebcd22d7d0b3dd4d5501bee5bac354d5b55) octopus (stable)": 2
    },
    "osd": {
        "ceph version 15.2.8 (bdf3eebcd22d7d0b3dd4d5501bee5bac354d5b55) octopus (stable)": 9
    },
    "mds": {},
    "tcmu-runner": {
        "ceph version 15.2.8 (bdf3eebcd22d7d0b3dd4d5501bee5bac354d5b55) octopus (stable)": 1
    },
    "overall": {
        "ceph version 15.2.8 (bdf3eebcd22d7d0b3dd4d5501bee5bac354d5b55) octopus (stable)": 15
    }
}
  • Before you start, you should verify that all hosts are currently online and your cluster is healthy.
[root@ceph1 ~]# ceph health
HEALTH_OK
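The health check covers the daemons; to verify that the hosts themselves are reachable by the orchestrator, you can also list them with ceph orch host ls. The output below is an illustrative sketch for this three-node cluster:

[root@ceph1 ~]# ceph orch host ls
HOST   ADDR   LABELS  STATUS
ceph1  ceph1
ceph2  ceph2
ceph3  ceph3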
  • To upgrade (or downgrade) to a specific release:

ceph orch upgrade start --ceph-version <version>

[root@ceph1 ~]# ceph orch upgrade start --ceph-version 15.2.9
Initiating upgrade to docker.io/ceph/ceph:v15.2.9
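While it runs, the state of the upgrade can also be queried directly with ceph orch upgrade status; the JSON below is a sketch of what it returns early in an upgrade.

[root@ceph1 ~]# ceph orch upgrade status
{
    "target_image": "docker.io/ceph/ceph:v15.2.9",
    "in_progress": true,
    "services_complete": [],
    "message": ""
}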
  • You can follow the progress of the upgrade in the cephadm log:
[root@ceph1 ~]# ceph -W cephadm
  cluster:
    id:     78c597f2-6080-11eb-b0a3-8ee8efbd56e1
    health: HEALTH_OK

  services:
    mon:         3 daemons, quorum ceph1,ceph2,ceph3 (age 58m)
    mgr:         ceph1.koarbq(active, since 58m), standbys: ceph2.dqggxs
    osd:         9 osds: 9 up (since 58m), 9 in (since 4w)
    tcmu-runner: 1 daemon active (ceph2:myrbd/VMWARE_DISK1)

  task status:

  data:
    pools:   2 pools, 33 pgs
    objects: 40 objects, 57 MiB
    usage:   9.4 GiB used, 891 GiB / 900 GiB avail
    pgs:     33 active+clean

  io:
    client: 853 B/s rd, 0 op/s rd, 0 op/s wr

  progress:
    Upgrade to docker.io/ceph/ceph:v15.2.9 (2m)
      [====........................] (remaining: 10m)
2021-02-25T08:39:32.941691+0100 mgr.ceph1.koarbq [INF] Found osd claims -> {}
2021-02-25T08:39:32.941838+0100 mgr.ceph1.koarbq [INF] Found osd claims for drivegroup all-available-devices -> {}
2021-02-25T08:39:32.945084+0100 mgr.ceph1.koarbq [INF] Applying all-available-devices on host ceph1...
2021-02-25T08:39:32.945730+0100 mgr.ceph1.koarbq [INF] Applying all-available-devices on host ceph2...
2021-02-25T08:39:32.946341+0100 mgr.ceph1.koarbq [INF] Applying all-available-devices on host ceph3...
2021-02-25T08:39:40.314740+0100 mgr.ceph1.koarbq [INF] Upgrade: Target is docker.io/ceph/ceph:v15.2.9 with id dfc48307963697ff48acd9dd6fda4a7a24017b9d8124f86c2a542b0802fe77ba
2021-02-25T08:39:40.318526+0100 mgr.ceph1.koarbq [INF] Upgrade: Checking mgr daemons...
2021-02-25T08:39:40.318715+0100 mgr.ceph1.koarbq [INF] Upgrade: Need to upgrade myself (mgr.ceph1.koarbq)
2021-02-25T08:39:40.323823+0100 mgr.ceph1.koarbq [INF] Failing over to other MGR
2021-02-25T08:39:46.451602+0100 mgr.ceph2.dqggxs [INF] refreshing ceph3 facts
....
....
2021-02-25T08:47:21.358360+0100 mgr.ceph2.dqggxs [INF] Upgrade: All rbd-mirror daemons are up to date.
2021-02-25T08:47:21.358470+0100 mgr.ceph2.dqggxs [INF] Upgrade: Finalizing container_image settings
2021-02-25T08:47:21.513877+0100 mgr.ceph2.dqggxs [INF] Upgrade: Complete!
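If something looks wrong part-way through, the upgrade can be interrupted and restarted later with the same ceph orch upgrade start command; the orchestrator also provides pause and resume subcommands. For example, to stop a running upgrade:

[root@ceph1 ~]# ceph orch upgrade stop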
  • Check the current version again
[root@ceph1 ~]# ceph versions
{
    "mon": {
        "ceph version 15.2.9 (357616cbf726abb779ca75a551e8d02568e15b17) octopus (stable)": 3
    },
    "mgr": {
        "ceph version 15.2.9 (357616cbf726abb779ca75a551e8d02568e15b17) octopus (stable)": 2
    },
    "osd": {
        "ceph version 15.2.9 (357616cbf726abb779ca75a551e8d02568e15b17) octopus (stable)": 9
    },
    "mds": {},
    "tcmu-runner": {
        "ceph version 15.2.9 (357616cbf726abb779ca75a551e8d02568e15b17) octopus (stable)": 1
    },
    "overall": {
        "ceph version 15.2.9 (357616cbf726abb779ca75a551e8d02568e15b17) octopus (stable)": 15
    }
}
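As a final check, confirm that the cluster has returned to a healthy state:

[root@ceph1 ~]# ceph health
HEALTH_OK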

Downgrade

It is also possible to downgrade by proceeding in the same way:

[root@ceph1 ~]# ceph orch upgrade start --ceph-version 15.2.8
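Cephadm resolves the requested version to a container image, so the upgrade can also be pointed at an image directly, for example when pulling from a private registry. The image below is the upstream default and only an example:

[root@ceph1 ~]# ceph orch upgrade start --image docker.io/ceph/ceph:v15.2.8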

Conclusion

Cephadm can safely upgrade Ceph from one release to another. The upgrade begins with the managers and monitors, then proceeds to the remaining daemons, and each daemon is restarted only after Ceph indicates that the cluster will remain available.
Note, however, that the cluster health status is likely to switch to HEALTH_WARN at points during the upgrade, even though the cluster remains highly available throughout.
