Sergei Genchev wrote:
> One of the OSD daemons failed to deploy with ceph-deploy. I have a server with 18 disks, and 17 OSD daemons configured. The reason for the failure is unimportant at this point; I believe it was a race condition, as I was running ceph-deploy inside a while loop for all disks in this server.
> The command that failed, and did not quite clean up after itself, was:
>
> ceph-deploy osd create --bluestore --dmcrypt --data /dev/sdd --block-db osvg/sdd-db
>
> Now I have two leftover LVM dm-crypted volumes that I am not sure how to clean up.

On Thu, at 10:10 AM Alfredo Deza wrote:
> If you do not want to keep them around you would need to use --destroy:
>
> ceph-volume lvm zap --destroy osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz

On Thu, at 10:55 AM Sergei Genchev wrote:
> I tried using ceph-volume to zap these stores, but none of the commands worked, including yours, 'ceph-volume lvm zap osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz':
>
> # ceph-volume lvm zap osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
> --> Zapping: osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
> Running command: /usr/sbin/cryptsetup status /dev/mapper/
> Running command: /usr/sbin/wipefs --all osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
>  stderr: wipefs: error: osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz: probing initialization failed: No such file or directory
> --> RuntimeError: command returned non-zero exit status: 1
>
> I ended up manually removing the LUKS volumes and then deleting the LVM LV, VG, and PV:
>
> cryptsetup remove /dev/mapper/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
> cryptsetup remove /dev/mapper/AeV0iG-odWF-NRPE-1bVK-0mxH-OgHL-fneTzr
>
> I did not have any reason to keep the volumes around.

Alfredo Deza wrote:
> Do you have output on how it failed before? In this case, you removed the LV, so the wipefs failed because that LV
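The manual cleanup described in the thread can be sketched as the sequence below. The device-mapper names and the zap target are taken verbatim from the messages above; the `lvremove`/`vgremove`/`pvremove` targets are assumptions inferred from the original create command, so verify the actual VG and PV names with `lvs`/`vgs`/`pvs` before running anything destructive.

```shell
# Close the leftover dm-crypt (LUKS) mappings left behind by the failed deploy
# (mapper names quoted from the thread)
cryptsetup remove /dev/mapper/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
cryptsetup remove /dev/mapper/AeV0iG-odWF-NRPE-1bVK-0mxH-OgHL-fneTzr

# Remove the logical volume that ceph-volume failed to zap
# (vg/lv path quoted from the zap attempt)
lvremove -f osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz

# Assumption: if the volume group and physical volume were created only for
# this OSD and should go too (the data device was /dev/sdd in the thread)
vgremove osvg-sdd-db
pvremove /dev/sdd
```

Once the dm-crypt mappings are closed and the LVM metadata is gone, `ceph-volume lvm zap --destroy` (or a fresh `ceph-deploy osd create`) should no longer trip over the stale volumes.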