115 changes: 115 additions & 0 deletions docs/design/virt-qcow2-datamover.md
@@ -0,0 +1,115 @@
# Virt incremental qcow2 datamover

In a kubevirt environment, one way to improve volume backup performance and allow more frequent backups is to take incremental qcow2 backups using libvirt tooling. Integrating this into OADP/Velero backups will require new OADP components (plugins, a controller) as well as some modifications to the Velero codebase.

## Background

Taking a VolumeSnapshot and then using kopia to process the entire volume and copy the required incremental changes into the Backup Storage Location (BSL) is a heavyweight process. Creating an incremental qcow2 backup for the same volume is generally a much more lightweight action. We want to make use of the existing Velero backup/restore process, with actual libvirt backup/restore happening via a new controller. For the moment, this will be referred to as the Virt Backup Controller. This action will be coordinated with Velero via existing infrastructure -- BackupItemActions (BIAs), RestoreItemActions (RIAs) and the DataUpload/DataDownload CRs. Initial implementation should require minimal changes to Velero, since Velero currently ignores DataUploads with `Spec.DataMover` set to something other than `velero`.

## Goals

- Back up and restore VM volumes using libvirt incremental backup instead of Velero's built-in CSI datamover
- Use existing Velero infrastructure to integrate this feature into regular velero backup and restore
- Implementation based on kubevirt enhancement defined at <https://github.com/kubevirt/enhancements/blob/main/veps/sig-storage/incremental-backup.md>
- There is a follow-on design PR at <https://github.com/kubevirt/enhancements/pull/126/changes> although this mainly discusses push-mode, which is out of scope for the current design.

## Non-goals
- Deep integration with Velero data mover pluggability (this could be considered in the long term, though, since it would minimize duplication of effort and enhance maintainability)
- Using push mode with kubevirt.

## Use cases
- As a user, I want to use OADP to trigger backups that back up volume data using libvirt tooling rather than CSI snapshots
- Volume backups will be incremental when possible (first backup for a given volume will be full, subsequent backups for that same volume will be incremental)
- Since libvirt incremental volume backups should be much faster than a CSI snapshot followed by an incremental kopia copy, the expectation is that users may run libvirt-based OADP backups more frequently than they would CSI-based backups.
- Users who back up more frequently with this method should use the backup's resource/volume policy configuration to ensure that these volumes are not also backed up with CSI snapshots or fs-backup.


## High-level design

### Upstream velero changes
- Update the Velero volume policy model to allow unrecognized volume policy actions. Velero would treat unrecognized actions as "skip" (i.e. no snapshot or fs-backup), but the libvirt datamover would only act if the policy action is "cnv" (a sketch of such a policy follows this list).
- In `pkg/restore/actions/dataupload_retrieve_action.go` and in the `DataDownload` CR, we need to add `SnapshotType`.
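
For illustration, a minimal sketch of a resource/volume policy using the proposed "cnv" action, assuming the existing Velero volume policy ConfigMap format referenced from the backup; the storage class condition is only a placeholder:

```yaml
# Hypothetical volume policy (stored in a ConfigMap referenced by the backup's
# resource policy). With the proposed change, core Velero treats the unrecognized
# "cnv" action as "skip", while the virt datamover acts on it.
version: v1
volumePolicies:
  - conditions:
      storageClass:
        - ocs-storagecluster-ceph-rbd   # placeholder storage class for VM volumes
    action:
      type: cnv
```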

Comment on lines +29 to +32
⚠️ Potential issue | 🟡 Minor

Clarify field naming: Go struct fields vs YAML fields (prevents implementation drift)
The doc mixes casings like Spec.DataMover / spec.DataMover and SnapshotMoveData in prose. Recommend explicitly stating whether you’re referring to Go API fields or YAML, and using consistent notation (e.g., spec.dataMover, spec.snapshotType, spec.snapshotMoveData) to avoid someone implementing the wrong field name.

Also applies to lines 46-48, 54-56, and 113


### BackupItemAction/RestoreItemAction plugins
- VirtualMachineInstance BIA plugin
- This plugin will check whether QEMU backup is enabled on any of the disks -- whether `Spec.Domain.Devices.Disk[i].CBT` is `true` for at least one disk. If none are enabled, the plugin exits without action (question: is this still valid, or is it replaced by the next check?)
- The plugin will check whether the VirtualMachineInstance's `status.ChangedBlockTracking` is `Enabled`
- The plugin must also determine whether the VM is running, since offline backup is not supported in the initial release.
- If QEMU backup is enabled, the next question is whether the user wants to use the virt datamover for this VM's volumes. We will use volume policies for this, although it is a bit more complicated since a VM can have multiple PVCs. If at least one PVC for the VM has the "cnv" policy action specified, and no PVC in this VM has another non-skip policy (e.g. "snapshot"), then we'll use the virt datamover:
- Iterate over all PVCs for the VM
- If any PVC has an action other than "cnv" or "skip", exit without action
- If at least one PVC has an action of "cnv", then use the virt datamover
- A temporary PVC is created which will be used to store the qcow2 images until they are copied into the BSL.
- An annotation will be added to the PVC identifying the VirtualMachineInstance we're backing up.
- This PVC probably doesn't need to be included in the backup -- if it turns out that it will be needed for restore purposes, it should be backed up *without* snapshotting or other PVC content backup. Whether we need this on restore will determine whether the PVC is added to the `additionalItems` return.
- Note: we may need to move PVC creation into the controller (see open questions below)
- This plugin will create a DataUpload with `Spec.SnapshotType` set to "qcow2" and `Spec.DataMover` set to "cnv" (see the sketch after this list)
- Add `velerov1api.DataUploadNameAnnotation` to VirtualMachineInstance
- An OperationID will be created and returned, similar to what's done in the CSI PVC plugin, and the async operation Progress method will report progress based on the DataUpload status (again similar to the CSI PVC plugin)
- VirtualMachineInstance RIA plugin
- Similar in functionality to the CSI PVC restore action
- Create a temporary PVC
- Create a DataDownload based on the DataUpload annotation and the DataUpload ConfigMap
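
To make the plugin output concrete, here is a rough sketch of the DataUpload the BIA plugin might create. Only `snapshotType: qcow2` and `dataMover: cnv` come from this design; the names, namespace, and the use of `sourcePVC` for the temporary PVC are illustrative assumptions.

```yaml
apiVersion: velero.io/v2alpha1
kind: DataUpload
metadata:
  name: my-backup-my-vm-qcow2       # hypothetical; derived from the Velero backup name
  namespace: openshift-adp          # assumption: the OADP/Velero install namespace
  labels:
    velero.io/backup-name: my-backup
spec:
  snapshotType: qcow2               # new snapshot type proposed by this design
  dataMover: cnv                    # anything other than "velero" is ignored by Velero itself
  sourcePVC: tmp-qcow2-pvc          # assumption: the temporary PVC created by the BIA plugin
  sourceNamespace: my-vm-namespace
  backupStorageLocation: default
  operationTimeout: 4h
```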

### Virt Backup Controller
- Responsible for reconciling DataUploads/DataDownloads where `Spec.DataMover` is "cnv"
- DataUpload reconciler (backup):
- Identify the (temporary) PVC from the DataUpload metadata
- Identify the VirtualMachineInstance from the PVC metadata
- Identify the already-existing qcow2 files that this incremental backup will be based on
- Create the VirtualMachineBackup CR with `source` set to the VirtualMachineInstance, `pvcName` set to the temporary PVC, `backupTracker` set to the name of a `VirtualMachineBackupTracker` referencing the most recent backup checkpoint, and (optionally) `forceFullBackup` set to `true` to force a full backup (a sketch of this CR follows this list)
- Wait for VMBackup to complete (monitoring status)
- Launch virt datamover pod mounting the temporary PVC with the qcow2 file(s) from the backup.
- This pod needs to run a command that will perform the datamover operation from the PVC to object storage
- Will there be multiple qcow2 files if there are multiple disks with CBT enabled? Will these all use the same checkpoint name?
- Copy the new file to object storage (see [Where to store qcow2 files](#wherehow-to-store-qcow2-files-and-metadata) below)
- Save any required metadata to identify the stored data (collection of qcow2 pathnames/checkpoints, etc.), along with identifying the backup and VirtualMachineInstance they're associated with. Save this metadata file as well (see [Where to store qcow2 files](#wherehow-to-store-qcow2-files-and-metadata) below)
- The VirtualMachineBackupTracker CR will be helpful here.
- We need to properly handle cases where we attempt an incremental backup but a full backup is taken instead (checkpoint lost, CSI snapshot restore since last checkpoint, VM restart, etc.)
- Aborted backups also need to be handled (resulting in a failed PVC backup on the Velero side)
- DataDownload reconciler (restore)
- (this area is less well defined so far, since the kubevirt enhancement doesn't go into as much detail on restore)
- We will need a temporary PVC for pulling qcow2 images from the object store (if we restore the empty temp PVC from the backup, that might work here)
- We also need PVCs created for each VM disk we're restoring from qcow2 images.
- We'll need to create another datamover pod here which will do the following:
- pod will have temp PVC mounted, as well as PVCs mounted for each vm disk we're creating.
- The pod's command/image will first get the list of qcow2 files to pull from object storage
- Once we have the qcow2 full backup and incremental files from the object store, repeatedly call `qemu-img rebase -b fullbackup.qcow2 -f qcow2 -u incrementalN.qcow2` for each incremental backup, in order
- Then convert the topmost qcow2 image in the resulting chain to a raw disk image: `qemu-img convert -f qcow2 -O raw incrementalN.qcow2 restored-raw.img` (convert follows the backing chain, so the result includes the full backup plus the applied incrementals; with no incrementals, convert `fullbackup.qcow2` directly)
- Finally, write this image to the PVC which contains the VM disk
- (repeat process for each disk if VM has multiple disks to restore)
- Note that the various `qemu-img` actions might eventually be combined into a single kubevirt API call, but for the moment this would need to be done manually.
- Once the datamover pod has restored the VM disks, it will exit and the VirtualMachineInstance can launch with these disks (the Velero datamover model, where the temporary PVC is deleted while retaining the PV, which is then bound to the VM's PVC, may work here). The temporary PVC (the one containing the qcow2 images, not the restored VM disk image) should be completely removed at this point, including the PV content.
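
As a rough sketch (not the authoritative kubevirt API), the VirtualMachineBackup CR created by the DataUpload reconciler might look like the following. The API group/version and the shape of `source` are assumptions; only the field names `source`, `pvcName`, `backupTracker`, and `forceFullBackup` come from the discussion above.

```yaml
apiVersion: backup.kubevirt.io/v1alpha1   # assumed group/version
kind: VirtualMachineBackup
metadata:
  name: my-backup-my-vm                   # hypothetical name
  namespace: my-vm-namespace
spec:
  source:                                 # assumed object-reference shape
    apiGroup: kubevirt.io
    kind: VirtualMachineInstance
    name: my-vm
  pvcName: tmp-qcow2-pvc                  # temporary PVC that receives the qcow2 file(s)
  backupTracker: my-vm-tracker            # VirtualMachineBackupTracker holding the latest checkpoint
  forceFullBackup: false                  # set to true to force a full rather than incremental backup
```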

Comment on lines +70 to +83
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

`qemu-img rebase -u` switches to "unsafe" mode: it only rewrites the backing-file name/format in the image metadata, without verifying or merging any data, and can corrupt the guest-visible contents if the new backing file is wrong; it is intended for renaming/moving a backing file or repairing an image whose backing file is already unavailable. Safe mode (the default, without `-u`) compares the old and new backing files and merges any differing clusters into the overlay, preserving guest-visible content, but it requires the old backing file to be accessible and can be as expensive as a convert. `-F`/`--backing-fmt` specifies the backing file's format explicitly instead of relying on format probing; only formats that support backing files (qcow2, qed) can be rebased. General guidance for incremental qcow2 chains: keep a known-good full backup as the chain base, avoid long backing chains (consolidate periodically), never run in-place rebases on your only copy, prefer safe rebase when the old backing file exists, and verify the final image with `qemu-img check` before use.

Remove the `-u` (unsafe) flag from the `qemu-img rebase` command; use safe rebase with an explicit backing format

In a restore scenario, the unsafe flag `-u` can silently corrupt the backing chain if files differ. Since the old backing files will be available during restore, safe rebase (the default) is both appropriate and necessary. Replace the command with `qemu-img rebase -b fullbackup.qcow2 -F qcow2 -f qcow2 incrementalN.qcow2` to explicitly specify the backing format and rely on safe merging of any differing clusters.




### Where/how to store qcow2 files and metadata
- Current approach:
- Use the Velero object store plugin API but not the velero-specific logic in `persistence/object-store.go`
- Create a top-level dir in the BSL (under the BSL prefix, parallel to backups/restores/kopia) for libvirt datamover.
- Actually, this may have to be outside the prefix (e.g. if the prefix is "data", then we may need to create a parallel dir such as "data-qcow2", since I think Velero allows only its own files under the prefix)
- Copy individual qcow2 files and metadata files identifying the required qcow2 checkpoints. We may want a subdir per VirtualMachineInstance for qcow2 files; metadata files should probably be organized by Velero backup (see the layout sketch after this list).
- We need to manage storage usage on backup deletion -- when deleting a backup, we should delete any qcow2 files no longer referenced by any still-existing backups.
- Other approaches:
- On a volume in the cluster
- Likely the simplest approach
- The volume must be mounted by the controller pod
- Will require its own periodic Velero backups (taken less frequently than the VM incremental backups) for disaster recovery purposes
- Using kopia
- We could use kopia on top of the object storage API, but it is not clear that this will provide any real benefits, since we're already working with files that represent just the data diff we need. We can just manage them as individual objects.
- This will also require additional overhead around kopia maintenance, and we still may need to manage qcow2 file deletion manually.
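
To make the current approach concrete, one possible object layout is sketched below; all names are illustrative, and the "data-qcow2" parallel-to-prefix directory reflects the caveat above.

```
<bucket>/
├── data/                         # existing Velero prefix (backups/, restores/, kopia/, ...)
└── data-qcow2/                   # parallel dir for the virt datamover (illustrative)
    ├── vmis/<namespace>.<vmi-name>/
    │   ├── full-<checkpoint-1>.qcow2
    │   ├── incr-<checkpoint-2>.qcow2
    │   └── incr-<checkpoint-3>.qcow2
    └── backups/<velero-backup-name>/
        └── qcow2-manifest.json   # qcow2 files/checkpoints this backup needs, and their order
```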

### Open questions
- How to determine PVC size?
- user-configurable? configmap or annotation?
- From the kubevirt enhancement: "Before the process begins, an estimation of the required backup size will be performed. If the provided PVC size is insufficient, an error will be returned"
- If the PVC is too small, we need a clear error on the backup indicating that it failed due to insufficient PVC space.
- If controller is responsible for PVC creation rather than plugin, then the controller may be able to respond to PVC too small errors by retrying with a larger PVC.
- The virt datamover controller will be responsible for deleting the `VirtualMachineBackup` resource once it's no longer needed. When should this happen? Upon velero backup deletion? This would enable debugging in the case of failed operations. If we delete it immediately, that will make troubleshooting more difficult. If on backup deletion, we'll need to write a `DeleteItemAction` plugin.
- When a backup is deleted, its snapshots may still be needed by more recent incremental backups. Whatever means we're using to map snapshots to backups will need to update this mapping to associate the deleted backup's snapshots with the next most recent backup that relies on these files. This means that for a given backup and volume we could have more than one snapshot file; we need to handle this mapping, including the order of application.
- Do we need an option to force full backups? If we always do incremental backups, the incremental backup list eventually becomes very long, possibly requiring hundreds of incremental files to be applied for a single restore.
- For initial release, we can add a force-full-virt-backup annotation on the velero backup. Longer-term, we can push for a general datamover feature in velero which could force full backups for both fs-backup and velero datamover if backup.Spec.ForceFullVolumeBackup is true, and once implemented, the qcow2 datamover can use this as well.

### General notes
- `SnapshotMoveData` must be true on the backup, or DU/DD processing won't work properly (see the example backup at the end of this document)
- Longer-term, we can probably eliminate some of the custom code in the new controller by refactoring the velero datamover pluggability features, allowing the node agent to orchestrate this (with a custom image and configuration for datamover pods, etc.)
- The kubevirt enhancement references both "push mode" and "pull mode" -- initial implementation on the kubevirt side will be push mode only. This OADP proposal is also push mode only for the initial implementation
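
For reference, a sketch of a Velero Backup that would drive this datamover, assuming the policy ConfigMap from the earlier sketch; the annotation name for forcing a full backup is a placeholder, not a decided API.

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: vm-backup-1
  namespace: openshift-adp                            # assumption: OADP/Velero install namespace
  annotations:
    oadp.openshift.io/force-full-virt-backup: "false" # hypothetical annotation name (see open questions)
spec:
  includedNamespaces:
    - my-vm-namespace
  snapshotMoveData: true                              # required for DU/DD processing
  resourcePolicy:                                     # ConfigMap containing the "cnv" volume policy
    kind: configmap
    name: cnv-volume-policy
```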