
colo: Update Documentation for continuous replication

Document the qemu command-line and qmp commands for continuous replication

Signed-off-by: Lukas Straub <lukasstraub2@web.de>
Signed-off-by: Jason Wang <jasowang@redhat.com>

Authored by Lukas Straub, committed by Jason Wang
90dfe59b 19731365

2 files changed, +183 -67

docs/COLO-FT.txt (+165 -57)
···
 in test procedure.

 == Test procedure ==
-1. Startup qemu
-Primary:
-# qemu-system-x86_64 -accel kvm -m 2048 -smp 2 -qmp stdio -name primary \
-  -device piix3-usb-uhci -vnc :7 \
-  -device usb-tablet -netdev tap,id=hn0,vhost=off \
-  -device virtio-net-pci,id=net-pci0,netdev=hn0 \
-  -drive if=virtio,id=primary-disk0,driver=quorum,read-pattern=fifo,vote-threshold=1,\
-    children.0.file.filename=1.raw,\
-    children.0.driver=raw -S
-Secondary:
-# qemu-system-x86_64 -accel kvm -m 2048 -smp 2 -qmp stdio -name secondary \
-  -device piix3-usb-uhci -vnc :7 \
-  -device usb-tablet -netdev tap,id=hn0,vhost=off \
-  -device virtio-net-pci,id=net-pci0,netdev=hn0 \
-  -drive if=none,id=secondary-disk0,file.filename=1.raw,driver=raw,node-name=node0 \
-  -drive if=virtio,id=active-disk0,driver=replication,mode=secondary,\
-    file.driver=qcow2,top-id=active-disk0,\
-    file.file.filename=/mnt/ramfs/active_disk.img,\
-    file.backing.driver=qcow2,\
-    file.backing.file.filename=/mnt/ramfs/hidden_disk.img,\
-    file.backing.backing=secondary-disk0 \
-  -incoming tcp:0:8888
+Note: Here we are running both instances on the same host for testing;
+change the IP addresses if you want to run it on two hosts. Initially
+127.0.0.1 is the Primary Host and 127.0.0.2 is the Secondary Host.
+
+== Startup qemu ==
+1. Primary:
+Note: Initially, $imagefolder/primary.qcow2 needs to be copied to all hosts.
+You don't need to change any IPs here, because 0.0.0.0 listens on any
+interface. The chardevs with 127.0.0.1 IPs loop back to the local qemu
+instance.
+
+# imagefolder="/mnt/vms/colo-test-primary"
+
+# qemu-system-x86_64 -enable-kvm -cpu qemu64,+kvmclock -m 512 -smp 1 -qmp stdio \
+   -device piix3-usb-uhci -device usb-tablet -name primary \
+   -netdev tap,id=hn0,vhost=off,helper=/usr/lib/qemu/qemu-bridge-helper \
+   -device rtl8139,id=e0,netdev=hn0 \
+   -chardev socket,id=mirror0,host=0.0.0.0,port=9003,server,nowait \
+   -chardev socket,id=compare1,host=0.0.0.0,port=9004,server,wait \
+   -chardev socket,id=compare0,host=127.0.0.1,port=9001,server,nowait \
+   -chardev socket,id=compare0-0,host=127.0.0.1,port=9001 \
+   -chardev socket,id=compare_out,host=127.0.0.1,port=9005,server,nowait \
+   -chardev socket,id=compare_out0,host=127.0.0.1,port=9005 \
+   -object filter-mirror,id=m0,netdev=hn0,queue=tx,outdev=mirror0 \
+   -object filter-redirector,netdev=hn0,id=redire0,queue=rx,indev=compare_out \
+   -object filter-redirector,netdev=hn0,id=redire1,queue=rx,outdev=compare0 \
+   -object iothread,id=iothread1 \
+   -object colo-compare,id=comp0,primary_in=compare0-0,secondary_in=compare1,\
+      outdev=compare_out0,iothread=iothread1 \
+   -drive if=ide,id=colo-disk0,driver=quorum,read-pattern=fifo,vote-threshold=1,\
+      children.0.file.filename=$imagefolder/primary.qcow2,children.0.driver=qcow2 -S
+
+2. Secondary:
+Note: Active and hidden images need to be created only once and the
+size should be the same as primary.qcow2. Again, you don't need to change
+any IPs here, except for the $primary_ip variable.
+
+# imagefolder="/mnt/vms/colo-test-secondary"
+# primary_ip=127.0.0.1
+
+# qemu-img create -f qcow2 $imagefolder/secondary-active.qcow2 10G
+
+# qemu-img create -f qcow2 $imagefolder/secondary-hidden.qcow2 10G
+
+# qemu-system-x86_64 -enable-kvm -cpu qemu64,+kvmclock -m 512 -smp 1 -qmp stdio \
+   -device piix3-usb-uhci -device usb-tablet -name secondary \
+   -netdev tap,id=hn0,vhost=off,helper=/usr/lib/qemu/qemu-bridge-helper \
+   -device rtl8139,id=e0,netdev=hn0 \
+   -chardev socket,id=red0,host=$primary_ip,port=9003,reconnect=1 \
+   -chardev socket,id=red1,host=$primary_ip,port=9004,reconnect=1 \
+   -object filter-redirector,id=f1,netdev=hn0,queue=tx,indev=red0 \
+   -object filter-redirector,id=f2,netdev=hn0,queue=rx,outdev=red1 \
+   -object filter-rewriter,id=rew0,netdev=hn0,queue=all \
+   -drive if=none,id=parent0,file.filename=$imagefolder/primary.qcow2,driver=qcow2 \
+   -drive if=none,id=childs0,driver=replication,mode=secondary,file.driver=qcow2,\
+      top-id=colo-disk0,file.file.filename=$imagefolder/secondary-active.qcow2,\
+      file.backing.driver=qcow2,file.backing.file.filename=$imagefolder/secondary-hidden.qcow2,\
+      file.backing.backing=parent0 \
+   -drive if=ide,id=colo-disk0,driver=quorum,read-pattern=fifo,vote-threshold=1,\
+      children.0=childs0 \
+   -incoming tcp:0.0.0.0:9998
+

-2. On Secondary VM's QEMU monitor, issue command
+3. On Secondary VM's QEMU monitor, issue command
 {'execute':'qmp_capabilities'}
-{ 'execute': 'nbd-server-start',
-  'arguments': {'addr': {'type': 'inet', 'data': {'host': 'xx.xx.xx.xx', 'port': '8889'} } }
-}
-{'execute': 'nbd-server-add', 'arguments': {'device': 'secondary-disk0', 'writable': true } }
+{'execute': 'nbd-server-start', 'arguments': {'addr': {'type': 'inet', 'data': {'host': '0.0.0.0', 'port': '9999'} } } }
+{'execute': 'nbd-server-add', 'arguments': {'device': 'parent0', 'writable': true } }

 Note:
 a. The qmp command nbd-server-start and nbd-server-add must be run
    before running the qmp command migrate on primary QEMU
 b. Active disk, hidden disk and nbd target's length should be the
    same.
-c. It is better to put active disk and hidden disk in ramdisk.
+c. It is better to put active disk and hidden disk in ramdisk. They
+   will be merged into the parent disk on failover.

-3. On Primary VM's QEMU monitor, issue command:
+4. On Primary VM's QEMU monitor, issue command:
 {'execute':'qmp_capabilities'}
-{ 'execute': 'human-monitor-command',
-  'arguments': {'command-line': 'drive_add -n buddy driver=replication,mode=primary,file.driver=nbd,file.host=xx.xx.xx.xx,file.port=8889,file.export=secondary-disk0,node-name=nbd_client0'}}
-{ 'execute':'x-blockdev-change', 'arguments':{'parent': 'primary-disk0', 'node': 'nbd_client0' } }
-{ 'execute': 'migrate-set-capabilities',
-  'arguments': {'capabilities': [ {'capability': 'x-colo', 'state': true } ] } }
-{ 'execute': 'migrate', 'arguments': {'uri': 'tcp:xx.xx.xx.xx:8888' } }
+{'execute': 'human-monitor-command', 'arguments': {'command-line': 'drive_add -n buddy driver=replication,mode=primary,file.driver=nbd,file.host=127.0.0.2,file.port=9999,file.export=parent0,node-name=replication0'}}
+{'execute': 'x-blockdev-change', 'arguments':{'parent': 'colo-disk0', 'node': 'replication0' } }
+{'execute': 'migrate-set-capabilities', 'arguments': {'capabilities': [ {'capability': 'x-colo', 'state': true } ] } }
+{'execute': 'migrate', 'arguments': {'uri': 'tcp:127.0.0.2:9998' } }

 Note:
 a. There should be only one NBD Client for each primary disk.
-b. xx.xx.xx.xx is the secondary physical machine's hostname or IP
-c. The qmp command line must be run after running qmp command line in
+b. The qmp command line must be run after running qmp command line in
    secondary qemu.

-4. After the above steps, you will see, whenever you make changes to PVM, SVM will be synced.
+5. After the above steps, you will see, whenever you make changes to PVM, SVM will be synced.
 You can issue command '{ "execute": "migrate-set-parameters" , "arguments":{ "x-checkpoint-delay": 2000 } }'
-to change the checkpoint period time
+to change the idle checkpoint period time
+
+6. Failover test
+You can kill one of the VMs and failover on the surviving VM:
+
+If you killed the Secondary, then follow "Primary Failover". After that,
+if you want to resume the replication, follow "Primary resume replication".
+
+If you killed the Primary, then follow "Secondary Failover". After that,
+if you want to resume the replication, follow "Secondary resume replication".
+
+== Primary Failover ==
+The Secondary died; resume on the Primary:
+
+{'execute': 'x-blockdev-change', 'arguments':{ 'parent': 'colo-disk0', 'child': 'children.1'} }
+{'execute': 'human-monitor-command', 'arguments':{ 'command-line': 'drive_del replication0' } }
+{'execute': 'object-del', 'arguments':{ 'id': 'comp0' } }
+{'execute': 'object-del', 'arguments':{ 'id': 'iothread1' } }
+{'execute': 'object-del', 'arguments':{ 'id': 'm0' } }
+{'execute': 'object-del', 'arguments':{ 'id': 'redire0' } }
+{'execute': 'object-del', 'arguments':{ 'id': 'redire1' } }
+{'execute': 'x-colo-lost-heartbeat' }
+
+== Secondary Failover ==
+The Primary died; resume on the Secondary and prepare to become the new Primary:
+
+{'execute': 'nbd-server-stop'}
+{'execute': 'x-colo-lost-heartbeat'}
+
+{'execute': 'object-del', 'arguments':{ 'id': 'f2' } }
+{'execute': 'object-del', 'arguments':{ 'id': 'f1' } }
+{'execute': 'chardev-remove', 'arguments':{ 'id': 'red1' } }
+{'execute': 'chardev-remove', 'arguments':{ 'id': 'red0' } }
+
+{'execute': 'chardev-add', 'arguments':{ 'id': 'mirror0', 'backend': {'type': 'socket', 'data': {'addr': { 'type': 'inet', 'data': { 'host': '0.0.0.0', 'port': '9003' } }, 'server': true } } } }
+{'execute': 'chardev-add', 'arguments':{ 'id': 'compare1', 'backend': {'type': 'socket', 'data': {'addr': { 'type': 'inet', 'data': { 'host': '0.0.0.0', 'port': '9004' } }, 'server': true } } } }
+{'execute': 'chardev-add', 'arguments':{ 'id': 'compare0', 'backend': {'type': 'socket', 'data': {'addr': { 'type': 'inet', 'data': { 'host': '127.0.0.1', 'port': '9001' } }, 'server': true } } } }
+{'execute': 'chardev-add', 'arguments':{ 'id': 'compare0-0', 'backend': {'type': 'socket', 'data': {'addr': { 'type': 'inet', 'data': { 'host': '127.0.0.1', 'port': '9001' } }, 'server': false } } } }
+{'execute': 'chardev-add', 'arguments':{ 'id': 'compare_out', 'backend': {'type': 'socket', 'data': {'addr': { 'type': 'inet', 'data': { 'host': '127.0.0.1', 'port': '9005' } }, 'server': true } } } }
+{'execute': 'chardev-add', 'arguments':{ 'id': 'compare_out0', 'backend': {'type': 'socket', 'data': {'addr': { 'type': 'inet', 'data': { 'host': '127.0.0.1', 'port': '9005' } }, 'server': false } } } }
+
+== Primary resume replication ==
+Resume replication after the new Secondary is up.
+
+Start the new Secondary (Steps 2 and 3 above), then on the Primary:
+{'execute': 'drive-mirror', 'arguments':{ 'device': 'colo-disk0', 'job-id': 'resync', 'target': 'nbd://127.0.0.2:9999/parent0', 'mode': 'existing', 'format': 'raw', 'sync': 'full'} }
+
+Wait until the disk is synced, then:
+{'execute': 'stop'}
+{'execute': 'block-job-cancel', 'arguments':{ 'device': 'resync'} }
+
+{'execute': 'human-monitor-command', 'arguments':{ 'command-line': 'drive_add -n buddy driver=replication,mode=primary,file.driver=nbd,file.host=127.0.0.2,file.port=9999,file.export=parent0,node-name=replication0'}}
+{'execute': 'x-blockdev-change', 'arguments':{ 'parent': 'colo-disk0', 'node': 'replication0' } }
+
+{'execute': 'object-add', 'arguments':{ 'qom-type': 'filter-mirror', 'id': 'm0', 'props': { 'netdev': 'hn0', 'queue': 'tx', 'outdev': 'mirror0' } } }
+{'execute': 'object-add', 'arguments':{ 'qom-type': 'filter-redirector', 'id': 'redire0', 'props': { 'netdev': 'hn0', 'queue': 'rx', 'indev': 'compare_out' } } }
+{'execute': 'object-add', 'arguments':{ 'qom-type': 'filter-redirector', 'id': 'redire1', 'props': { 'netdev': 'hn0', 'queue': 'rx', 'outdev': 'compare0' } } }
+{'execute': 'object-add', 'arguments':{ 'qom-type': 'iothread', 'id': 'iothread1' } }
+{'execute': 'object-add', 'arguments':{ 'qom-type': 'colo-compare', 'id': 'comp0', 'props': { 'primary_in': 'compare0-0', 'secondary_in': 'compare1', 'outdev': 'compare_out0', 'iothread': 'iothread1' } } }

-5. Failover test
-   You can kill Primary VM and run 'x_colo_lost_heartbeat' in Secondary VM's
-   monitor at the same time, then SVM will failover and client will not detect this
-   change.
+{'execute': 'migrate-set-capabilities', 'arguments':{ 'capabilities': [ {'capability': 'x-colo', 'state': true } ] } }
+{'execute': 'migrate', 'arguments':{ 'uri': 'tcp:127.0.0.2:9998' } }
+
+Note:
+If this Primary previously was a Secondary, then we need to insert the
+filters before the filter-rewriter by using the
+"'insert': 'before', 'position': 'id=rew0'" options. See below.
+
+== Secondary resume replication ==
+Become Primary and resume replication after the new Secondary is up. Note
+that now 127.0.0.1 is the Secondary and 127.0.0.2 is the Primary.
+
+Start the new Secondary (Steps 2 and 3 above, but with primary_ip=127.0.0.2),
+then on the old Secondary:
+{'execute': 'drive-mirror', 'arguments':{ 'device': 'colo-disk0', 'job-id': 'resync', 'target': 'nbd://127.0.0.1:9999/parent0', 'mode': 'existing', 'format': 'raw', 'sync': 'full'} }
+
+Wait until the disk is synced, then:
+{'execute': 'stop'}
+{'execute': 'block-job-cancel', 'arguments':{ 'device': 'resync' } }
+
+{'execute': 'human-monitor-command', 'arguments':{ 'command-line': 'drive_add -n buddy driver=replication,mode=primary,file.driver=nbd,file.host=127.0.0.1,file.port=9999,file.export=parent0,node-name=replication0'}}
+{'execute': 'x-blockdev-change', 'arguments':{ 'parent': 'colo-disk0', 'node': 'replication0' } }

-Before issuing '{ "execute": "x-colo-lost-heartbeat" }' command, we have to
-issue block related command to stop block replication.
-Primary:
-  Remove the nbd child from the quorum:
-  { 'execute': 'x-blockdev-change', 'arguments': {'parent': 'colo-disk0', 'child': 'children.1'}}
-  { 'execute': 'human-monitor-command','arguments': {'command-line': 'drive_del blk-buddy0'}}
-  Note: there is no qmp command to remove the blockdev now
+{'execute': 'object-add', 'arguments':{ 'qom-type': 'filter-mirror', 'id': 'm0', 'props': { 'insert': 'before', 'position': 'id=rew0', 'netdev': 'hn0', 'queue': 'tx', 'outdev': 'mirror0' } } }
+{'execute': 'object-add', 'arguments':{ 'qom-type': 'filter-redirector', 'id': 'redire0', 'props': { 'insert': 'before', 'position': 'id=rew0', 'netdev': 'hn0', 'queue': 'rx', 'indev': 'compare_out' } } }
+{'execute': 'object-add', 'arguments':{ 'qom-type': 'filter-redirector', 'id': 'redire1', 'props': { 'insert': 'before', 'position': 'id=rew0', 'netdev': 'hn0', 'queue': 'rx', 'outdev': 'compare0' } } }
+{'execute': 'object-add', 'arguments':{ 'qom-type': 'iothread', 'id': 'iothread1' } }
+{'execute': 'object-add', 'arguments':{ 'qom-type': 'colo-compare', 'id': 'comp0', 'props': { 'primary_in': 'compare0-0', 'secondary_in': 'compare1', 'outdev': 'compare_out0', 'iothread': 'iothread1' } } }

-Secondary:
-  The primary host is down, so we should do the following thing:
-  { 'execute': 'nbd-server-stop' }
+{'execute': 'migrate-set-capabilities', 'arguments':{ 'capabilities': [ {'capability': 'x-colo', 'state': true } ] } }
+{'execute': 'migrate', 'arguments':{ 'uri': 'tcp:127.0.0.1:9998' } }

 == TODO ==
-1. Support continuous VM replication.
-2. Support shared storage.
-3. Develop the heartbeat part.
-4. Reduce checkpoint VM's downtime while doing checkpoint.
+1. Support shared storage.
+2. Develop the heartbeat part.
+3. Reduce checkpoint VM's downtime while doing checkpoint.
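The failover sequences above are meant to be typed into the QMP monitor by hand. As a minimal sketch, the "Primary Failover" sequence can also be scripted; this assumes qemu was additionally given a QMP unix socket (e.g. `-qmp unix:/tmp/qmp-primary.sock,server,nowait`, not shown in the command lines above), and it ignores asynchronous QMP events that a robust client would have to filter out. The helper names are hypothetical; only the QMP commands themselves come from the document.

```python
import json
import socket

def primary_failover_cmds():
    """Build the QMP command sequence for failover on the surviving Primary."""
    cmds = [
        {'execute': 'qmp_capabilities'},
        # Detach the (dead) replication child from the quorum node.
        {'execute': 'x-blockdev-change',
         'arguments': {'parent': 'colo-disk0', 'child': 'children.1'}},
        {'execute': 'human-monitor-command',
         'arguments': {'command-line': 'drive_del replication0'}},
    ]
    # Tear down the comparison and filter objects, in the order the
    # documentation lists them.
    for obj_id in ('comp0', 'iothread1', 'm0', 'redire0', 'redire1'):
        cmds.append({'execute': 'object-del', 'arguments': {'id': obj_id}})
    cmds.append({'execute': 'x-colo-lost-heartbeat'})
    return cmds

def run_qmp(path, cmds):
    """Send each command over a QMP unix socket and collect the replies."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(path)
        f = s.makefile('rw')
        f.readline()  # discard the QMP greeting banner
        replies = []
        for cmd in cmds:
            f.write(json.dumps(cmd) + '\n')
            f.flush()
            replies.append(json.loads(f.readline()))
        return replies

# Usage (against a running primary qemu):
# run_qmp('/tmp/qmp-primary.sock', primary_failover_cmds())
```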
docs/block-replication.txt (+18 -10)
···
          ^          ||                        .----------
          |          ||                        | Secondary
     1 Quorum        ||                        '----------
-     /      \       ||
-    /        \      ||
- Primary   2 filter
-   disk        ^                            virtio-blk
-      |                                         ^
-    3 NBD  ------->  3 NBD                      |
+     /      \       ||                      virtio-blk
+    /        \      ||                          ^
+ Primary   2 filter                             |
+   disk        ^                            7 Quorum
+      |                                        /
+    3 NBD  ------->  3 NBD                    /
    client    ||    server                2 filter
             ||       ^                      ^
 --------.   ||       |                      |
···
 of the NBD server into the secondary disk. So before block replication,
 the primary disk and secondary disk should contain the same data.

+7) The secondary also has a quorum node, so after secondary failover it
+   can become the new primary and continue replication.
+
+
 == Failure Handling ==
 There are 7 internal errors when block replication is running:
 1. I/O error on primary disk
···
    leading whitespace.
 5. The qmp command line must be run after running qmp command line in
    secondary qemu.
-6. After failover we need remove children.1 (replication driver).
+6. After primary failover we need to remove children.1 (replication driver).

 Secondary:
 -drive if=none,driver=raw,file.filename=1.raw,id=colo1 \
--drive if=xxx,id=topxxx,driver=replication,mode=secondary,top-id=topxxx\
+-drive if=none,id=childs1,driver=replication,mode=secondary,top-id=childs1,\
    file.file.filename=active_disk.qcow2,\
    file.driver=qcow2,\
    file.backing.file.filename=hidden_disk.qcow2,\
    file.backing.driver=qcow2,\
    file.backing.backing=colo1
+-drive if=xxx,driver=quorum,read-pattern=fifo,id=top-disk1,\
+   vote-threshold=1,children.0=childs1

 Then run qmp command in secondary qemu:
 { 'execute': 'nbd-server-start',
···
 The primary host is down, so we should do the following thing:
 { 'execute': 'nbd-server-stop' }

+Promote the Secondary to Primary:
+see COLO-FT.txt
+
 TODO:
-1. Continuous block replication
-2. Shared disk
+1. Shared disk