
backup: Make sure that source and target size match

Since the introduction of a backup filter node in commit 00e30f05d, the
backup block job crashes when the target image is smaller than the
source image because it will try to write after the end of the target
node without having BLK_PERM_RESIZE. (Previously, the BlockBackend layer
would have caught this and errored out gracefully.)

We can fix this and even do better than the old behaviour: Check that
source and target have the same image size at the start of the block job
and unshare BLK_PERM_RESIZE. (This permission was already unshared
before the same commit 00e30f05d, but the BlockBackend that was used to
make the restriction was removed without a replacement.) This will
immediately error out when starting the job instead of only when writing
to a block that doesn't exist in the target.

A target longer than the source would technically work because we would
never write to blocks that don't exist, but semantically it is invalid,
too, because a backup is supposed to create a copy, not just an image
that starts with a copy.

Fixes: 00e30f05de1d19586345ec373970ef4c192c6270
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1778593
Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20200430142755.315494-4-kwolf@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

---
 block/backup-top.c | 14 +++++++++-----
 block/backup.c     | 14 +++++++++++++-
 2 files changed, 22 insertions(+), 6 deletions(-)

diff --git a/block/backup-top.c b/block/backup-top.c
--- a/block/backup-top.c
+++ b/block/backup-top.c
@@ -148,8 +148,10 @@
          *
          * Share write to target (child_file), to not interfere
          * with guest writes to its disk which may be in target backing chain.
+         * Can't resize during a backup block job because we check the size
+         * only upfront.
          */
-        *nshared = BLK_PERM_ALL;
+        *nshared = BLK_PERM_ALL & ~BLK_PERM_RESIZE;
         *nperm = BLK_PERM_WRITE;
     } else {
         /* Source child */
@@ -159,7 +161,7 @@
         if (perm & BLK_PERM_WRITE) {
             *nperm = *nperm | BLK_PERM_CONSISTENT_READ;
         }
-        *nshared &= ~BLK_PERM_WRITE;
+        *nshared &= ~(BLK_PERM_WRITE | BLK_PERM_RESIZE);
     }
 }
@@ -192,11 +194,13 @@
 {
     Error *local_err = NULL;
     BDRVBackupTopState *state;
-    BlockDriverState *top = bdrv_new_open_driver(&bdrv_backup_top_filter,
-                                                 filter_node_name,
-                                                 BDRV_O_RDWR, errp);
+    BlockDriverState *top;
     bool appended = false;
 
+    assert(source->total_sectors == target->total_sectors);
+
+    top = bdrv_new_open_driver(&bdrv_backup_top_filter, filter_node_name,
+                               BDRV_O_RDWR, errp);
     if (!top) {
         return NULL;
     }

diff --git a/block/backup.c b/block/backup.c
--- a/block/backup.c
+++ b/block/backup.c
@@ -340,7 +340,7 @@
                   BlockCompletionFunc *cb, void *opaque,
                   JobTxn *txn, Error **errp)
 {
-    int64_t len;
+    int64_t len, target_len;
     BackupBlockJob *job = NULL;
     int64_t cluster_size;
     BdrvRequestFlags write_flags;
@@ -402,6 +402,18 @@
     if (len < 0) {
         error_setg_errno(errp, -len, "Unable to get length for '%s'",
                          bdrv_get_device_or_node_name(bs));
+        goto error;
+    }
+
+    target_len = bdrv_getlength(target);
+    if (target_len < 0) {
+        error_setg_errno(errp, -target_len, "Unable to get length for '%s'",
+                         bdrv_get_device_or_node_name(bs));
+        goto error;
+    }
+
+    if (target_len != len) {
+        error_setg(errp, "Source and target image have different sizes");
         goto error;
     }
 