[DRBD-user] FW: Getting snapshotting working with VDO: a recent experience.

Eric Robinson eric.robinson at psmnv.com
Tue Jul 6 23:51:11 CEST 2021


For anyone else who is considering using VDO over DRBD, this is what we discovered about getting it to work.

From: Sweet Tea Dorminy <sweettea at redhat.com>
Sent: Tuesday, July 6, 2021 1:50 PM
To: vdo-devel <vdo-devel at redhat.com>
Subject: Getting snapshotting working with VDO: a recent experience.

Recently I had the pleasure of helping someone figure out how to use VDO with their snapshotting solution, and figured I'd send out a summary, with their permission, to the list in case it helps anyone else.
Setup: Eric was using DRBD for data replication, and VDO on top to provide data reduction, similarly to Linbit's article<https://linbit.com/blog/albireo-virtual-data-optimizer-vdo-on-drbd/>. The stack looked like this:
Physical -> LVM -> DRBD -> VDO -> FileSystem
Specifically:
/dev/sda
  /dev/vg/lv
    /dev/drbd0
      /dev/vdo0
        /my-filesystem (xfs)

Problem: Eric wanted to add snapshotting to the mix, taking a snapshot and verifying that it matched the filesystem. Without VDO in the stack, this is a simple matter of taking a snapshot of /dev/vg/lv and mounting the snapshot. With VDO, though, how to get from /dev/vg/lv-snap to a verifiable filesystem?
- 'vdo create --device=/dev/vg/lv-snap' doesn't work -- it formats a new VDO, and complains if it finds an already existing one.
- 'vdo import' doesn't work -- it complains that there's already a VDO with the same UUID on the system.
- Just mounting /dev/vg/lv-snap doesn't work, since the snapshot contains a VDO, not a filesystem.
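For contrast, the simple case without VDO can be sketched roughly like this (a non-authoritative sketch; the snapshot size, mount point, and comparison step are illustrative assumptions, not part of Eric's actual setup):

```shell
#!/usr/bin/env bash
# Sketch of the snapshot-verify cycle *without* VDO in the stack.
# Not executed here; names and sizes are illustrative assumptions.
verify_snapshot_without_vdo() {
    # Take a copy-on-write snapshot of the LV under DRBD.
    sudo lvcreate --size 2G --snapshot --name lv-snap /dev/vg/lv
    # The snapshot holds a filesystem directly, so it mounts as-is.
    sudo mkdir -p /mnt/snap-check
    sudo mount /dev/vg/lv-snap /mnt/snap-check
    # ... compare the snapshot contents against /my-filesystem ...
    # Clean up.
    sudo umount /mnt/snap-check
    sudo lvremove -y /dev/vg/lv-snap
}
```

With VDO in the stack, the snapshot holds a VDO, not a filesystem, so the mount step above is exactly what breaks.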

Solving: In order to start a VDO from storage containing an already formatted VDO, 'vdo import' is necessary, but as noted above, it reported a UUID collision. After some searching, we realized that passing "--uuid=''" (the empty string) to vdo import would import the snapshot VDO and also change its UUID, so it wouldn't collide with the original VDO.

For instance:
[sweettea at localhost ~]$ sudo vdo create --name vdo0 --device /dev/fedora_localhost-live/vms
Creating VDO vdo0
      Logical blocks defaulted to 6802574 blocks.
      The VDO volume can address 26 GB in 13 data slabs, each 2 GB.
      It can grow to address at most 16 TB of physical storage in 8192 slabs.
      If a larger maximum size might be needed, use bigger slabs.
Starting VDO vdo0
Starting compression on VDO vdo0
VDO instance 0 volume is ready at /dev/mapper/vdo0
[sweettea at localhost ~]$ sudo lvcreate --size 2G --snapshot --name vms_snap /dev/fedora_localhost-live/vms
  Logical volume "vms_snap" created.
[sweettea at localhost ~]$ sudo vdo import --name vdo0-snap --device /dev/fedora_localhost-live/vms_snap
Importing VDO vdo0-snap
vdo: ERROR - UUID df61bc46-13dc-4091-bdec-4896c744888c already exists in VDO volume(s) stored on /dev/disk/by-id/dm-uuid-LVM-Gexh1cit2vwmcvIf2AAvullg3mWvrnql0SluovtpLXaYGoyPw84zRD4NPmGO3Nu4
[sweettea at localhost ~]$ sudo vdo import --name vdo0-snap --device /dev/fedora_localhost-live/vms_snap --uuid
usage: vdo import [-h] -n <volume> --device <devicepath> [--activate {disabled,enabled}] [--blockMapCacheSize <megabytes>] [--blockMapPeriod <period>] [--compression {disabled,enabled}]
                  [--deduplication {disabled,enabled}] [--emulate512 {disabled,enabled}] [--maxDiscardSize <megabytes>] [--uuid <uuid>] [--vdoAckThreads <threadCount>]
                  [--vdoBioRotationInterval <ioCount>] [--vdoBioThreads <threadCount>] [--vdoCpuThreads <threadCount>] [--vdoHashZoneThreads <threadCount>]
                  [--vdoLogicalThreads <threadCount>] [--vdoLogLevel {critical,error,warning,notice,info,debug}] [--vdoPhysicalThreads <threadCount>]
                  [--writePolicy {async,async-unsafe,sync,auto}] [-f <file>] [--logfile <pathname>] [--verbose]
vdo import: error: argument --uuid: expected one argument
[sweettea at localhost ~]$ sudo vdo import --name vdo0-snap --device /dev/fedora_localhost-live/vms_snap --uuid ''
Importing VDO vdo0-snap
Starting VDO vdo0-snap
Starting compression on VDO vdo0-snap
VDO instance 1 volume is ready at /dev/mapper/vdo0-snap

(The documentation on --uuid was somewhat confusing, but passing an empty string worked.)
However, mounting the resulting snapshot still didn't work.
'journalctl -k' showed these logs from a different attempt:
[3714073.665039] kvdo3:dmsetup: underlying device, REQ_FLUSH: supported, REQ_FUA: not supported
[3714073.665041] kvdo3:dmsetup: Using write policy async automatically.
[3714073.665042] kvdo3:dmsetup: loading device 'snap_vdo0'
[3714073.665055] kvdo3:dmsetup: zones: 1 logical, 1 physical, 1 hash; base threads: 5
[3714073.724597] kvdo3:dmsetup: starting device 'snap_vdo0'
[3714073.724607] kvdo3:journalQ: Device was dirty, rebuilding reference counts
[3714074.025715] kvdo3:journalQ: Finished reading recovery journal
[3714074.032016] kvdo3:journalQ: Highest-numbered recovery journal block has sequence number 4989105, and the highest-numbered usable block is 4989105
[3714074.337654] kvdo3:physQ0: Replaying entries into slab journals for zone 0
[3714074.536446] kvdo3:physQ0: Recreating missing journal entries for zone 0
[3714074.536503] kvdo3:journalQ: Replayed 6365157 journal entries into slab journals
[3714074.536504] kvdo3:journalQ: Synthesized 0 missing journal entries
[3714074.538423] kvdo3:journalQ: Saving recovery progress
[3714074.803255] kvdo3:logQ0: Replaying 2545622 recovery entries into block map
[3714077.211236] kvdo3:journalQ: Flushing block map changes
[3714077.231360] kvdo3:journalQ: Rebuild complete.
[3714077.296258] kvdo3:journalQ: Entering recovery mode
[3714077.296460] kvdo3:dmsetup: Setting UDS index target state to online
[3714077.296473] kvdo3:dmsetup: device 'snap_vdo0' started
[3714077.296474] kvdo3:dmsetup: resuming device 'snap_vdo0'
[3714077.296476] kvdo3:dmsetup: device 'snap_vdo0' resumed
[3714077.296493] uds: kvdo3:dedupeQ: loading or rebuilding index: dev=/dev/disk/by-id/dm-uuid-LVM-CKGi2ErfgK2rWEYpTJg0OhkXrkf7bRCAIzSDMIJdu5kf0SZFdqgtOySFCarMb1Cl offset=4096 size=2781704192
[3714077.339599] uds: kvdo3:dedupeQ: Using 6 indexing zones for concurrency.
[3714077.448849] kvdo3:packerQ: compression is enabled
[3714078.142975] uds: kvdo3:dedupeQ: loaded index from chapter 0 through chapter 85
[3714078.227367] uds: kvdo3:dedupeQ: Replaying volume from chapter 3929 through chapter 4952
[3714078.233636] uds: kvdo3:dedupeQ: unexpected index page map update, jumping from 85 to 3929
[3714082.392773] kvdo3:journalQ: Exiting recovery mode
[3714134.313962] XFS (dm-8): Filesystem has duplicate UUID be2e762b-6dea-48c4-83ad-08ece1cac43b - can't mount
[3714156.566380] uds: kvdo3:dedupeQ: replay changed index page map update from 85 to 4951
[3714320.750730] kvdo3:dmsetup: suspending device 'snap_vdo0'
[3714320.751156] uds: dmsetup: beginning save (vcn 4951)
[3714321.135314] uds: dmsetup: finished save (vcn 4951)
[3714321.135322] kvdo3:dmsetup: device 'snap_vdo0' suspended
[3714321.135360] kvdo3:dmsetup: stopping device 'snap_vdo0'
[3714321.201264] kvdo3:dmsetup: device 'snap_vdo0' stopped

Based on the XFS line above ("Filesystem has duplicate UUID ... can't mount"), the remaining problem was an XFS UUID collision, much like the VDO UUID collision on the first attempt to vdo import. Searching revealed this link<https://www.miljan.org/main/2009/11/16/lvm-snapshots-and-xfs/>, which suggested either 'xfs_admin -U generate' to change the UUID of the XFS filesystem, or 'mount -o nouuid' to ignore it. Using these, Eric was able to use the setup fully: taking a snapshot, starting VDO on the snapshot, and mounting the filesystem stored thereon.
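Putting the pieces together, the full working cycle can be sketched like this (a sketch only; the snapshot size, device names, and mount point are illustrative assumptions, and the teardown order is my own):

```shell
#!/usr/bin/env bash
# Sketch of the snapshot-verify cycle *with* VDO in the stack.
# Not executed here; names and sizes are illustrative assumptions.
verify_snapshot_with_vdo() {
    # 1. Snapshot the LV that backs DRBD/VDO.
    sudo lvcreate --size 2G --snapshot --name lv-snap /dev/vg/lv
    # 2. Import the snapshot's VDO under a new name; --uuid '' makes
    #    vdo assign a fresh UUID so it doesn't collide with the
    #    running original.
    sudo vdo import --name vdo0-snap --device /dev/vg/lv-snap --uuid ''
    # 3. Mount the XFS inside, ignoring its duplicate filesystem UUID.
    sudo mkdir -p /mnt/snap-check
    sudo mount -o nouuid /dev/mapper/vdo0-snap /mnt/snap-check
    # (Alternatively, rewrite the UUID permanently while unmounted:
    #  sudo xfs_admin -U generate /dev/mapper/vdo0-snap)
    # 4. Verify the contents, then tear down in reverse order.
    sudo umount /mnt/snap-check
    sudo vdo remove --name vdo0-snap
    sudo lvremove -y /dev/vg/lv-snap
}
```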

Hope this helps someone else in the future!
Sweet Tea

