
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging

* Make checkpatch say 'qemu' instead of 'kernel' (Aleksandar)
* Fix PSE guests with emulated NPT (Alexander B. #1)
* Fix leak (Alexander B. #2)
* HVF fixes (Roman, Cameron)
* New Sapphire Rapids CPUID bits (Cathy)
* cpus.c and softmmu/ cleanups (Claudio)
* TAP driver tweaks (Daniel, Havard)
* object-add bugfix and testcases (Eric A.)
* Fix Coverity MIN_CONST and MAX_CONST (Eric B.)
* "info lapic" improvement (Jan)
* SSE fixes (Joseph)
* "-msg guest-name" option (Mario)
* support for AMD nested live migration (myself)
* Small i386 TCG fixes (myself)
* improved error reporting for Xen (myself)
* fix "-cpu host -overcommit cpu-pm=on" (myself)
* Add accel/Kconfig (Philippe)
* iscsi sense handling fixes (Yongji)
* Misc bugfixes

# gpg: Signature made Sat 11 Jul 2020 00:33:41 BST
# gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg: issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83

* remotes/bonzini/tags/for-upstream: (47 commits)
linux-headers: update again to 5.8
apic: Report current_count via 'info lapic'
scripts: improve message when TAP based tests fail
target/i386: Enable TSX Suspend Load Address Tracking feature
target/i386: Add SERIALIZE cpu feature
softmmu/vl: Remove the check for colons in -accel parameters
cpu-throttle: new module, extracted from cpus.c
softmmu: move softmmu only files from root
pc: fix leak in pc_system_flash_cleanup_unused
cpus: Move CPU code from exec.c to cpus-common.c
target/i386: Correct the warning message of Intel PT
checkpatch: Change occurences of 'kernel' to 'qemu' in user messages
iscsi: return -EIO when sense fields are meaningless
iscsi: handle check condition status in retry loop
target/i386: sev: fail query-sev-capabilities if QEMU cannot use SEV
target/i386: sev: provide proper error reporting for query-sev-capabilities
KVM: x86: believe what KVM says about WAITPKG
target/i386: implement undocumented "smsw r32" behavior
target/i386: remove gen_io_end
Makefile: simplify MINIKCONF rules
...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

Diffstat: +1615 -441
Kconfig (new file, +4)
+source Kconfig.host
+source backends/Kconfig
+source accel/Kconfig
+source hw/Kconfig
Kconfig.host (-7)
···
 # down to Kconfig. See also MINIKCONF_ARGS in the Makefile:
 # these two need to be kept in sync.

-config KVM
-    bool
-
 config LINUX
     bool
···
 config VHOST_KERNEL
     bool
     select VHOST
-
-config XEN
-    bool
-    select FSDEV_9P if VIRTFS

 config VIRTFS
     bool
MAINTAINERS (+22 -7)
···
 M: Richard Henderson <rth@twiddle.net>
 R: Paolo Bonzini <pbonzini@redhat.com>
 S: Maintained
-F: cpus.c
+F: softmmu/cpus.c
 F: cpus-common.c
 F: exec.c
 F: accel/tcg/
···
 M: Paolo Bonzini <pbonzini@redhat.com>
 L: kvm@vger.kernel.org
 S: Supported
-F: */kvm.*
+F: */*/kvm*
 F: accel/kvm/
 F: accel/stubs/kvm-stub.c
 F: include/hw/kvm/
···
 F: target/i386/kvm.c
 F: scripts/kvm/vmxcap

+Guest CPU Cores (other accelerators)
+------------------------------------
+Overall
+M: Richard Henderson <rth@twiddle.net>
+R: Paolo Bonzini <pbonzini@redhat.com>
+S: Maintained
+F: include/sysemu/accel.h
+F: accel/accel.c
+F: accel/Makefile.objs
+F: accel/stubs/Makefile.objs
+
 X86 HVF CPUs
+M: Cameron Esfahani <dirty@apple.com>
 M: Roman Bolshakov <r.bolshakov@yadro.com>
+W: https://wiki.qemu.org/Features/HVF
 S: Maintained
 F: accel/stubs/hvf-stub.c
 F: target/i386/hvf/
···
 L: haxm-team@intel.com
 W: https://github.com/intel/haxm/issues
 S: Maintained
+F: accel/stubs/hax-stub.c
 F: include/sysemu/hax.h
 F: target/i386/hax-*
···
 S: Maintained
 F: hw/virtio/virtio-balloon*.c
 F: include/hw/virtio/virtio-balloon.h
-F: balloon.c
+F: softmmu/balloon.c
 F: include/sysemu/balloon.h

 virtio-9p
···
 M: Paolo Bonzini <pbonzini@redhat.com>
 S: Supported
 F: include/exec/ioport.h
-F: ioport.c
 F: include/exec/memop.h
 F: include/exec/memory.h
 F: include/exec/ram_addr.h
 F: include/exec/ramblock.h
-F: memory.c
+F: softmmu/ioport.c
+F: softmmu/memory.c
 F: include/exec/memory-internal.h
 F: exec.c
 F: scripts/coccinelle/memory-region-housekeeping.cocci
···
 Main loop
 M: Paolo Bonzini <pbonzini@redhat.com>
 S: Maintained
-F: cpus.c
 F: include/qemu/main-loop.h
 F: include/sysemu/runstate.h
 F: util/main-loop.c
 F: util/qemu-timer.c
 F: softmmu/vl.c
 F: softmmu/main.c
+F: softmmu/cpus.c
+F: softmmu/cpu-throttle.c
 F: qapi/run-state.json

 Human Monitor (HMP)
···
 M: Laurent Vivier <lvivier@redhat.com>
 R: Paolo Bonzini <pbonzini@redhat.com>
 S: Maintained
-F: qtest.c
+F: softmmu/qtest.c
 F: accel/qtest.c
 F: tests/qtest/
 X: tests/qtest/bios-tables-test-allowed-diff.h
Makefile (+6 -6)
···
 # This has to be kept in sync with Kconfig.host.
 MINIKCONF_ARGS = \
     $(CONFIG_MINIKCONF_MODE) \
-    $@ $*/config-devices.mak.d $< $(MINIKCONF_INPUTS) \
+    $@ $*/config-devices.mak.d $< $(SRC_PATH)/Kconfig \
+    CONFIG_TCG=$(CONFIG_TCG) \
     CONFIG_KVM=$(CONFIG_KVM) \
     CONFIG_SPICE=$(CONFIG_SPICE) \
     CONFIG_IVSHMEM=$(CONFIG_IVSHMEM) \
···
     CONFIG_LINUX=$(CONFIG_LINUX) \
     CONFIG_PVRDMA=$(CONFIG_PVRDMA)

-MINIKCONF_INPUTS = $(SRC_PATH)/Kconfig.host $(SRC_PATH)/backends/Kconfig $(SRC_PATH)/hw/Kconfig
-MINIKCONF_DEPS = $(MINIKCONF_INPUTS) $(wildcard $(SRC_PATH)/hw/*/Kconfig)
-MINIKCONF = $(PYTHON) $(SRC_PATH)/scripts/minikconf.py \
+MINIKCONF = $(PYTHON) $(SRC_PATH)/scripts/minikconf.py

-$(SUBDIR_DEVICES_MAK): %/config-devices.mak: default-configs/%.mak $(MINIKCONF_DEPS) $(BUILD_DIR)/config-host.mak
-	$(call quiet-command, $(MINIKCONF) $(MINIKCONF_ARGS) > $@.tmp, "GEN", "$@.tmp")
+$(SUBDIR_DEVICES_MAK): %/config-devices.mak: default-configs/%.mak $(SRC_PATH)/Kconfig $(BUILD_DIR)/config-host.mak
+	$(call quiet-command, $(MINIKCONF) $(MINIKCONF_ARGS) \
+	> $@.tmp, "GEN", "$@.tmp")
 	$(call quiet-command, if test -f $@; then \
 	  if cmp -s $@.old $@; then \
 	    mv $@.tmp $@; \
Makefile.target (+2 -5)
···
 #########################################################
 # System emulator target
 ifdef CONFIG_SOFTMMU
-obj-y += arch_init.o cpus.o gdbstub.o balloon.o ioport.o
-obj-y += qtest.o
+obj-y += softmmu/
+obj-y += gdbstub.o
 obj-y += dump/
 obj-y += hw/
 obj-y += monitor/
 obj-y += qapi/
-obj-y += memory.o
-obj-y += memory_mapping.o
 obj-y += migration/ram.o
-obj-y += softmmu/
 LIBS := $(libs_softmmu) $(LIBS)

 # Hardware support
accel/Kconfig (new file, +9)
+config TCG
+    bool
+
+config KVM
+    bool
+
+config XEN
+    bool
+    select FSDEV_9P if VIRTFS
accel/stubs/tcg-stub.c (+7)
···
 void tlb_set_dirty(CPUState *cpu, target_ulong vaddr)
 {
 }
+
+void *probe_access(CPUArchState *env, target_ulong addr, int size,
+                   MMUAccessType access_type, int mmu_idx, uintptr_t retaddr)
+{
+    /* Handled by hardware accelerator. */
+    g_assert_not_reached();
+}
arch_init.c → softmmu/arch_init.c (renamed)
balloon.c → softmmu/balloon.c (renamed)
block/iscsi.c (+12 -10)
···
     iTask->status = status;
     iTask->do_retry = 0;
+    iTask->err_code = 0;
     iTask->task = task;

     if (status != SCSI_STATUS_GOOD) {
+        iTask->err_code = -EIO;
         if (iTask->retries++ < ISCSI_CMD_RETRIES) {
             if (status == SCSI_STATUS_BUSY ||
                 status == SCSI_STATUS_TIMEOUT ||
···
                 timer_mod(&iTask->retry_timer,
                           qemu_clock_get_ms(QEMU_CLOCK_REALTIME) + retry_time);
                 iTask->do_retry = 1;
-            }
-        } else if (status == SCSI_STATUS_CHECK_CONDITION) {
-            int error = iscsi_translate_sense(&task->sense);
-            if (error == EAGAIN) {
-                error_report("iSCSI CheckCondition: %s",
-                             iscsi_get_error(iscsi));
-                iTask->do_retry = 1;
-            } else {
-                iTask->err_code = -error;
-                iTask->err_str = g_strdup(iscsi_get_error(iscsi));
+            } else if (status == SCSI_STATUS_CHECK_CONDITION) {
+                int error = iscsi_translate_sense(&task->sense);
+                if (error == EAGAIN) {
+                    error_report("iSCSI CheckCondition: %s",
+                                 iscsi_get_error(iscsi));
+                    iTask->do_retry = 1;
+                } else {
+                    iTask->err_code = -error;
+                    iTask->err_str = g_strdup(iscsi_get_error(iscsi));
+                }
             }
         }
     }
cpus-common.c (+18)
···
     return max_cpu_index;
 }

+CPUTailQ cpus = QTAILQ_HEAD_INITIALIZER(cpus);
+
 void cpu_list_add(CPUState *cpu)
 {
     QEMU_LOCK_GUARD(&qemu_cpu_list_lock);
···
     QTAILQ_REMOVE_RCU(&cpus, cpu, node);
     cpu->cpu_index = UNASSIGNED_CPU_INDEX;
 }
+
+CPUState *qemu_get_cpu(int index)
+{
+    CPUState *cpu;
+
+    CPU_FOREACH(cpu) {
+        if (cpu->cpu_index == index) {
+            return cpu;
+        }
+    }
+
+    return NULL;
+}
+
+/* current CPU in the current thread. It is only valid inside cpu_exec() */
+__thread CPUState *current_cpu;

 struct qemu_work_item {
     QSIMPLEQ_ENTRY(qemu_work_item) node;
softmmu/cpus.c (renamed from cpus.c, +8 -99)
···
 #include "hw/boards.h"
 #include "hw/hw.h"

+#include "sysemu/cpu-throttle.h"
+
 #ifdef CONFIG_LINUX

 #include <sys/prctl.h>
···
 int64_t max_delay;
 int64_t max_advance;
-
-/* vcpu throttling controls */
-static QEMUTimer *throttle_timer;
-static unsigned int throttle_percentage;
-
-#define CPU_THROTTLE_PCT_MIN 1
-#define CPU_THROTTLE_PCT_MAX 99
-#define CPU_THROTTLE_TIMESLICE_NS 10000000

 bool cpu_is_stopped(CPUState *cpu)
 {
···
     }
 };

-static void cpu_throttle_thread(CPUState *cpu, run_on_cpu_data opaque)
-{
-    double pct;
-    double throttle_ratio;
-    int64_t sleeptime_ns, endtime_ns;
-
-    if (!cpu_throttle_get_percentage()) {
-        return;
-    }
-
-    pct = (double)cpu_throttle_get_percentage()/100;
-    throttle_ratio = pct / (1 - pct);
-    /* Add 1ns to fix double's rounding error (like 0.9999999...) */
-    sleeptime_ns = (int64_t)(throttle_ratio * CPU_THROTTLE_TIMESLICE_NS + 1);
-    endtime_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + sleeptime_ns;
-    while (sleeptime_ns > 0 && !cpu->stop) {
-        if (sleeptime_ns > SCALE_MS) {
-            qemu_cond_timedwait(cpu->halt_cond, &qemu_global_mutex,
-                                sleeptime_ns / SCALE_MS);
-        } else {
-            qemu_mutex_unlock_iothread();
-            g_usleep(sleeptime_ns / SCALE_US);
-            qemu_mutex_lock_iothread();
-        }
-        sleeptime_ns = endtime_ns - qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
-    }
-    atomic_set(&cpu->throttle_thread_scheduled, 0);
-}
-
-static void cpu_throttle_timer_tick(void *opaque)
-{
-    CPUState *cpu;
-    double pct;
-
-    /* Stop the timer if needed */
-    if (!cpu_throttle_get_percentage()) {
-        return;
-    }
-    CPU_FOREACH(cpu) {
-        if (!atomic_xchg(&cpu->throttle_thread_scheduled, 1)) {
-            async_run_on_cpu(cpu, cpu_throttle_thread,
-                             RUN_ON_CPU_NULL);
-        }
-    }
-
-    pct = (double)cpu_throttle_get_percentage()/100;
-    timer_mod(throttle_timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL_RT) +
-                                   CPU_THROTTLE_TIMESLICE_NS / (1-pct));
-}
-
-void cpu_throttle_set(int new_throttle_pct)
-{
-    /* Ensure throttle percentage is within valid range */
-    new_throttle_pct = MIN(new_throttle_pct, CPU_THROTTLE_PCT_MAX);
-    new_throttle_pct = MAX(new_throttle_pct, CPU_THROTTLE_PCT_MIN);
-
-    atomic_set(&throttle_percentage, new_throttle_pct);
-
-    timer_mod(throttle_timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL_RT) +
-                                       CPU_THROTTLE_TIMESLICE_NS);
-}
-
-void cpu_throttle_stop(void)
-{
-    atomic_set(&throttle_percentage, 0);
-}
-
-bool cpu_throttle_active(void)
-{
-    return (cpu_throttle_get_percentage() != 0);
-}
-
-int cpu_throttle_get_percentage(void)
-{
-    return atomic_read(&throttle_percentage);
-}
-
 void cpu_ticks_init(void)
 {
     seqlock_init(&timers_state.vm_clock_seqlock);
     qemu_spin_init(&timers_state.vm_clock_lock);
     vmstate_register(NULL, 0, &vmstate_timers, &timers_state);
-    throttle_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL_RT,
-                                  cpu_throttle_timer_tick, NULL);
+    cpu_throttle_init();
 }

 void configure_icount(QemuOpts *opts, Error **errp)
···
     CPU_FOREACH(cpu) {
         cpu_synchronize_state(cpu);
-        /* TODO: move to cpu_synchronize_state() */
-        if (hvf_enabled()) {
-            hvf_cpu_synchronize_state(cpu);
-        }
     }
 }
···
     CPU_FOREACH(cpu) {
         cpu_synchronize_post_reset(cpu);
-        /* TODO: move to cpu_synchronize_post_reset() */
-        if (hvf_enabled()) {
-            hvf_cpu_synchronize_post_reset(cpu);
-        }
     }
 }
···
     CPU_FOREACH(cpu) {
         cpu_synchronize_post_init(cpu);
-        /* TODO: move to cpu_synchronize_post_init() */
-        if (hvf_enabled()) {
-            hvf_cpu_synchronize_post_init(cpu);
-        }
     }
 }
···
 void qemu_cond_wait_iothread(QemuCond *cond)
 {
     qemu_cond_wait(cond, &qemu_global_mutex);
+}
+
+void qemu_cond_timedwait_iothread(QemuCond *cond, int ms)
+{
+    qemu_cond_timedwait(cond, &qemu_global_mutex, ms);
 }

 static bool all_vcpus_paused(void)
exec.c (-22)
···
 static MemoryRegion io_mem_unassigned;
 #endif

-CPUTailQ cpus = QTAILQ_HEAD_INITIALIZER(cpus);
-
-/* current CPU in the current thread. It is only valid inside
-   cpu_exec() */
-__thread CPUState *current_cpu;
-
 uintptr_t qemu_host_page_size;
 intptr_t qemu_host_page_mask;
···
     }
 };

-#endif
-
-CPUState *qemu_get_cpu(int index)
-{
-    CPUState *cpu;
-
-    CPU_FOREACH(cpu) {
-        if (cpu->cpu_index == index) {
-            return cpu;
-        }
-    }
-
-    return NULL;
-}
-
-#if !defined(CONFIG_USER_ONLY)
 void cpu_address_space_init(CPUState *cpu, int asidx,
                             const char *prefix, MemoryRegion *mr)
 {
hw/core/null-machine.c (+5)
···
     mc->max_cpus = 1;
     mc->default_ram_size = 0;
     mc->default_ram_id = "ram";
+    mc->no_serial = 1;
+    mc->no_parallel = 1;
+    mc->no_floppy = 1;
+    mc->no_cdrom = 1;
+    mc->no_sdcard = 1;
 }

 DEFINE_MACHINE("none", machine_none_machine_init)
hw/i386/pc_sysfw.c (+5)
···
     object_property_add_child(OBJECT(pcms), name, OBJECT(dev));
     object_property_add_alias(OBJECT(pcms), alias_prop_name,
                               OBJECT(dev), "drive");
+    /*
+     * The returned reference is tied to the child property and
+     * will be removed with object_unparent.
+     */
+    object_unref(OBJECT(dev));
     return PFLASH_CFI01(dev);
 }
hw/intc/apic.c (-18)
···
     return 0;
 }

-static uint32_t apic_get_current_count(APICCommonState *s)
-{
-    int64_t d;
-    uint32_t val;
-    d = (qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) - s->initial_count_load_time) >>
-        s->count_shift;
-    if (s->lvt[APIC_LVT_TIMER] & APIC_LVT_TIMER_PERIODIC) {
-        /* periodic */
-        val = s->initial_count - (d % ((uint64_t)s->initial_count + 1));
-    } else {
-        if (d >= s->initial_count)
-            val = 0;
-        else
-            val = s->initial_count - d;
-    }
-    return val;
-}
-
 static void apic_timer_update(APICCommonState *s, int64_t current_time)
 {
     if (apic_next_timer(s, current_time)) {
hw/intc/apic_common.c (+19)
···
     return true;
 }

+uint32_t apic_get_current_count(APICCommonState *s)
+{
+    int64_t d;
+    uint32_t val;
+    d = (qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) - s->initial_count_load_time) >>
+        s->count_shift;
+    if (s->lvt[APIC_LVT_TIMER] & APIC_LVT_TIMER_PERIODIC) {
+        /* periodic */
+        val = s->initial_count - (d % ((uint64_t)s->initial_count + 1));
+    } else {
+        if (d >= s->initial_count) {
+            val = 0;
+        } else {
+            val = s->initial_count - d;
+        }
+    }
+    return val;
+}
+
 void apic_init_reset(DeviceState *dev)
 {
     APICCommonState *s;
include/hw/core/cpu.h (-37)
···
  */
 CPUState *cpu_by_arch_id(int64_t id);

-/**
- * cpu_throttle_set:
- * @new_throttle_pct: Percent of sleep time. Valid range is 1 to 99.
- *
- * Throttles all vcpus by forcing them to sleep for the given percentage of
- * time. A throttle_percentage of 25 corresponds to a 75% duty cycle roughly.
- * (example: 10ms sleep for every 30ms awake).
- *
- * cpu_throttle_set can be called as needed to adjust new_throttle_pct.
- * Once the throttling starts, it will remain in effect until cpu_throttle_stop
- * is called.
- */
-void cpu_throttle_set(int new_throttle_pct);
-
-/**
- * cpu_throttle_stop:
- *
- * Stops the vcpu throttling started by cpu_throttle_set.
- */
-void cpu_throttle_stop(void);
-
-/**
- * cpu_throttle_active:
- *
- * Returns: %true if the vcpus are currently being throttled, %false otherwise.
- */
-bool cpu_throttle_active(void);
-
-/**
- * cpu_throttle_get_percentage:
- *
- * Returns the vcpu throttle percentage. See cpu_throttle_set for details.
- *
- * Returns: The throttle percentage in range 1 to 99.
- */
-int cpu_throttle_get_percentage(void);
-
 #ifndef CONFIG_USER_ONLY

 typedef void (*CPUInterruptHandler)(CPUState *, int);
include/hw/i386/apic_internal.h (+1)
···
                       TPRAccess access);

 int apic_get_ppr(APICCommonState *s);
+uint32_t apic_get_current_count(APICCommonState *s);

 static inline void apic_set_bit(uint32_t *tab, int index)
 {
include/qemu/error-report.h (+2)
···
 const char *error_get_progname(void);

 extern bool error_with_timestamp;
+extern bool error_with_guestname;
+extern const char *error_guest_name;

 #endif
include/qemu/main-loop.h (+5)
···
  */
 void qemu_cond_wait_iothread(QemuCond *cond);

+/*
+ * qemu_cond_timedwait_iothread: like the previous, but with timeout
+ */
+void qemu_cond_timedwait_iothread(QemuCond *cond, int ms);
+
 /* internal interfaces */

 void qemu_fd_register(int fd);
include/qemu/osdep.h (+14 -7)
···
  * Note that neither form is usable as an #if condition; if you truly
  * need to write conditional code that depends on a minimum or maximum
  * determined by the pre-processor instead of the compiler, you'll
- * have to open-code it.
+ * have to open-code it.  Sadly, Coverity is severely confused by the
+ * constant variants, so we have to dumb things down there.
  */
 #undef MIN
 #define MIN(a, b)                                       \
     ({                                                  \
         typeof(1 ? (a) : (b)) _a = (a), _b = (b);       \
         _a < _b ? _a : _b;                              \
     })
-#define MIN_CONST(a, b)                                 \
-    __builtin_choose_expr(                              \
-        __builtin_constant_p(a) && __builtin_constant_p(b),     \
-        (a) < (b) ? (a) : (b),                          \
-        ((void)0))
 #undef MAX
 #define MAX(a, b)                                       \
     ({                                                  \
         typeof(1 ? (a) : (b)) _a = (a), _b = (b);       \
         _a > _b ? _a : _b;                              \
     })
-#define MAX_CONST(a, b)                                 \
+
+#ifdef __COVERITY__
+# define MIN_CONST(a, b) ((a) < (b) ? (a) : (b))
+# define MAX_CONST(a, b) ((a) > (b) ? (a) : (b))
+#else
+# define MIN_CONST(a, b)                                \
+    __builtin_choose_expr(                              \
+        __builtin_constant_p(a) && __builtin_constant_p(b),     \
+        (a) < (b) ? (a) : (b),                          \
+        ((void)0))
+# define MAX_CONST(a, b)                                \
     __builtin_choose_expr(                              \
         __builtin_constant_p(a) && __builtin_constant_p(b),     \
         (a) > (b) ? (a) : (b),                          \
         ((void)0))
+#endif

 /*
  * Minimum function that returns zero only if both values are zero.
include/qom/object.h (+24 -2)
···
 void object_unref(Object *obj);

 /**
- * object_property_add:
+ * object_property_try_add:
  * @obj: the object to add a property to
  * @name: the name of the property. This can contain any character except for
  *        a forward slash. In general, you should use hyphens '-' instead of
···
  *   meant to allow a property to free its opaque upon object
  *   destruction. This may be NULL.
  * @opaque: an opaque pointer to pass to the callbacks for the property
+ * @errp: pointer to error object
  *
  * Returns: The #ObjectProperty; this can be used to set the @resolve
  * callback for child and link properties.
+ */
+ObjectProperty *object_property_try_add(Object *obj, const char *name,
+                                        const char *type,
+                                        ObjectPropertyAccessor *get,
+                                        ObjectPropertyAccessor *set,
+                                        ObjectPropertyRelease *release,
+                                        void *opaque, Error **errp);
+
+/**
+ * object_property_add:
+ * Same as object_property_try_add() with @errp hardcoded to
+ * &error_abort.
  */
 ObjectProperty *object_property_add(Object *obj, const char *name,
                                     const char *type,
···
 Object *object_resolve_path_component(Object *parent, const char *part);

 /**
- * object_property_add_child:
+ * object_property_try_add_child:
  * @obj: the object to add a property to
  * @name: the name of the property
  * @child: the child object
+ * @errp: pointer to error object
  *
  * Child properties form the composition tree. All objects need to be a child
  * of another object. Objects can only be a child of one object.
···
  * The child object itself can be retrieved using object_property_get_link().
  *
  * Returns: The newly added property on success, or %NULL on failure.
+ */
+ObjectProperty *object_property_try_add_child(Object *obj, const char *name,
+                                              Object *child, Error **errp);
+
+/**
+ * object_property_add_child:
+ * Same as object_property_try_add_child() with @errp hardcoded to
+ * &error_abort
  */
 ObjectProperty *object_property_add_child(Object *obj, const char *name,
                                           Object *child);
include/sysemu/cpu-throttle.h (new file, +68)
+/*
+ * Copyright (c) 2012 SUSE LINUX Products GmbH
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see
+ * <http://www.gnu.org/licenses/gpl-2.0.html>
+ */
+
+#ifndef SYSEMU_CPU_THROTTLE_H
+#define SYSEMU_CPU_THROTTLE_H
+
+#include "qemu/timer.h"
+
+/**
+ * cpu_throttle_init:
+ *
+ * Initialize the CPU throttling API.
+ */
+void cpu_throttle_init(void);
+
+/**
+ * cpu_throttle_set:
+ * @new_throttle_pct: Percent of sleep time. Valid range is 1 to 99.
+ *
+ * Throttles all vcpus by forcing them to sleep for the given percentage of
+ * time. A throttle_percentage of 25 corresponds to a 75% duty cycle roughly.
+ * (example: 10ms sleep for every 30ms awake).
+ *
+ * cpu_throttle_set can be called as needed to adjust new_throttle_pct.
+ * Once the throttling starts, it will remain in effect until cpu_throttle_stop
+ * is called.
+ */
+void cpu_throttle_set(int new_throttle_pct);
+
+/**
+ * cpu_throttle_stop:
+ *
+ * Stops the vcpu throttling started by cpu_throttle_set.
+ */
+void cpu_throttle_stop(void);
+
+/**
+ * cpu_throttle_active:
+ *
+ * Returns: %true if the vcpus are currently being throttled, %false otherwise.
+ */
+bool cpu_throttle_active(void);
+
+/**
+ * cpu_throttle_get_percentage:
+ *
+ * Returns the vcpu throttle percentage. See cpu_throttle_set for details.
+ *
+ * Returns: The throttle percentage in range 1 to 99.
+ */
+int cpu_throttle_get_percentage(void);
+
+#endif /* SYSEMU_CPU_THROTTLE_H */
include/sysemu/hvf.h (+1 -1)
···
 void hvf_cpu_synchronize_state(CPUState *);
 void hvf_cpu_synchronize_post_reset(CPUState *);
 void hvf_cpu_synchronize_post_init(CPUState *);
+void hvf_cpu_synchronize_pre_loadvm(CPUState *);
 void hvf_vcpu_destroy(CPUState *);
-void hvf_reset_vcpu(CPUState *);

 #define TYPE_HVF_ACCEL ACCEL_CLASS_NAME("hvf")
include/sysemu/hw_accel.h (+13)
···
 #include "hw/core/cpu.h"
 #include "sysemu/hax.h"
 #include "sysemu/kvm.h"
+#include "sysemu/hvf.h"
 #include "sysemu/whpx.h"

 static inline void cpu_synchronize_state(CPUState *cpu)
···
     }
     if (hax_enabled()) {
         hax_cpu_synchronize_state(cpu);
+    }
+    if (hvf_enabled()) {
+        hvf_cpu_synchronize_state(cpu);
     }
     if (whpx_enabled()) {
         whpx_cpu_synchronize_state(cpu);
···
     }
     if (hax_enabled()) {
         hax_cpu_synchronize_post_reset(cpu);
+    }
+    if (hvf_enabled()) {
+        hvf_cpu_synchronize_post_reset(cpu);
     }
     if (whpx_enabled()) {
         whpx_cpu_synchronize_post_reset(cpu);
···
     if (hax_enabled()) {
         hax_cpu_synchronize_post_init(cpu);
     }
+    if (hvf_enabled()) {
+        hvf_cpu_synchronize_post_init(cpu);
+    }
     if (whpx_enabled()) {
         whpx_cpu_synchronize_post_init(cpu);
     }
···
     }
     if (hax_enabled()) {
         hax_cpu_synchronize_pre_loadvm(cpu);
+    }
+    if (hvf_enabled()) {
+        hvf_cpu_synchronize_pre_loadvm(cpu);
     }
     if (whpx_enabled()) {
         whpx_cpu_synchronize_pre_loadvm(cpu);
ioport.c → softmmu/ioport.c (renamed)
linux-headers/asm-arm/unistd-common.h (+1)
···
 #define __NR_clone3 (__NR_SYSCALL_BASE + 435)
 #define __NR_openat2 (__NR_SYSCALL_BASE + 437)
 #define __NR_pidfd_getfd (__NR_SYSCALL_BASE + 438)
+#define __NR_faccessat2 (__NR_SYSCALL_BASE + 439)

 #endif /* _ASM_ARM_UNISTD_COMMON_H */
linux-headers/asm-x86/kvm.h (+3 -2)
···
 };

 struct kvm_vmx_nested_state_hdr {
-	__u32 flags;
 	__u64 vmxon_pa;
 	__u64 vmcs12_pa;
-	__u64 preemption_timer_deadline;

 	struct {
 		__u16 flags;
 	} smm;
+
+	__u32 flags;
+	__u64 preemption_timer_deadline;
 };

 struct kvm_svm_nested_state_data {
memory.c → softmmu/memory.c (renamed)
memory_mapping.c → softmmu/memory_mapping.c (renamed)
migration/migration.c (+1)
···
 #include "socket.h"
 #include "sysemu/runstate.h"
 #include "sysemu/sysemu.h"
+#include "sysemu/cpu-throttle.h"
 #include "rdma.h"
 #include "ram.h"
 #include "migration/global_state.h"
migration/ram.c (+1)
···
 #include "migration/colo.h"
 #include "block.h"
 #include "sysemu/sysemu.h"
+#include "sysemu/cpu-throttle.h"
 #include "savevm.h"
 #include "qemu/iov.h"
 #include "multifd.h"
qemu-options.hx (+9 -3)
···
 DEF("no-kvm", 0, QEMU_OPTION_no_kvm, "", QEMU_ARCH_I386)

 DEF("msg", HAS_ARG, QEMU_OPTION_msg,
-    "-msg timestamp[=on|off]\n"
+    "-msg [timestamp[=on|off]][,guest-name=[on|off]]\n"
     "                control error message format\n"
-    "                timestamp=on enables timestamps (default: off)\n",
+    "                timestamp=on enables timestamps (default: off)\n"
+    "                guest-name=on enables guest name prefix but only if\n"
+    "                              -name guest option is set (default: off)\n",
     QEMU_ARCH_ALL)
 SRST
-``-msg timestamp[=on|off]``
+``-msg [timestamp[=on|off]][,guest-name[=on|off]]``
     Control error message format.

     ``timestamp=on|off``
         Prefix messages with a timestamp. Default is off.
+
+    ``guest-name=on|off``
+        Prefix messages with guest name but only if -name guest option is set
+        otherwise the option is ignored. Default is off.
 ERST

 DEF("dump-vmstate", HAS_ARG, QEMU_OPTION_dump_vmstate,
qom/object.c (+16 -5)
···
     }
 }

-static ObjectProperty *
+ObjectProperty *
 object_property_try_add(Object *obj, const char *name, const char *type,
                         ObjectPropertyAccessor *get,
                         ObjectPropertyAccessor *set,
···
 }

 ObjectProperty *
-object_property_add_child(Object *obj, const char *name,
-                          Object *child)
+object_property_try_add_child(Object *obj, const char *name,
+                              Object *child, Error **errp)
 {
     g_autofree char *type = NULL;
     ObjectProperty *op;
···
     type = g_strdup_printf("child<%s>", object_get_typename(child));

-    op = object_property_add(obj, name, type, object_get_child_property, NULL,
-                             object_finalize_child_property, child);
+    op = object_property_try_add(obj, name, type, object_get_child_property,
+                                 NULL, object_finalize_child_property,
+                                 child, errp);
+    if (!op) {
+        return NULL;
+    }
     op->resolve = object_resolve_child_property;
     object_ref(child);
     child->parent = obj;
     return op;
+}
+
+ObjectProperty *
+object_property_add_child(Object *obj, const char *name,
+                          Object *child)
+{
+    return object_property_try_add_child(obj, name, child, &error_abort);
 }

 void object_property_allow_set_link(const Object *obj, const char *name,
qom/object_interfaces.c (+5 -2)
···
     }

     if (id != NULL) {
-        object_property_add_child(object_get_objects_root(),
-                                  id, obj);
+        object_property_try_add_child(object_get_objects_root(),
+                                      id, obj, &local_err);
+        if (local_err) {
+            goto out;
+        }
     }

     if (!user_creatable_complete(USER_CREATABLE(obj), &local_err)) {
qtest.c → softmmu/qtest.c (renamed)
scripts/checkpatch.pl (+3 -3)
···
 Options:
   -q, --quiet                quiet
-  --no-tree                  run without a kernel tree
+  --no-tree                  run without a qemu tree
   --no-signoff               do not check for 'Signed-off-by' line
   --patch                    treat FILE as patchfile
   --branch                   treat args as GIT revision list
···
   --terse                    one line per report
   -f, --file                 treat FILE as regular source file
   --strict                   fail if only warnings are found
-  --root=PATH                PATH to the kernel tree root
+  --root=PATH                PATH to the qemu tree root
   --no-summary               suppress the per-file summary
   --mailback                 only produce a report in case of warnings/errors
   --summary-file             include the filename in summary
···
 }

 if (!defined $root) {
-	print "Must be run from the top-level dir. of a kernel tree\n";
+	print "Must be run from the top-level dir. of a qemu tree\n";
 	exit(2);
 }
+1 -1
scripts/tap-driver.pl
···
 
 sub testsuite_error ($)
 {
-    report "ERROR", "- $_[0]";
+    report "ERROR", "$test_name - $_[0]";
 }
 
 sub handle_tap_result ($)
+11
softmmu/Makefile.objs
···
 softmmu-main-y = softmmu/main.o
+
+obj-y += arch_init.o
+obj-y += cpus.o
+obj-y += cpu-throttle.o
+obj-y += balloon.o
+obj-y += ioport.o
+obj-y += memory.o
+obj-y += memory_mapping.o
+
+obj-y += qtest.o
+
 obj-y += vl.o
 vl.o-cflags := $(GPROF_CFLAGS) $(SDL_CFLAGS)
+122
softmmu/cpu-throttle.c
···
+/*
+ * QEMU System Emulator
+ *
+ * Copyright (c) 2003-2008 Fabrice Bellard
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu-common.h"
+#include "qemu/thread.h"
+#include "hw/core/cpu.h"
+#include "qemu/main-loop.h"
+#include "sysemu/cpus.h"
+#include "sysemu/cpu-throttle.h"
+
+/* vcpu throttling controls */
+static QEMUTimer *throttle_timer;
+static unsigned int throttle_percentage;
+
+#define CPU_THROTTLE_PCT_MIN 1
+#define CPU_THROTTLE_PCT_MAX 99
+#define CPU_THROTTLE_TIMESLICE_NS 10000000
+
+static void cpu_throttle_thread(CPUState *cpu, run_on_cpu_data opaque)
+{
+    double pct;
+    double throttle_ratio;
+    int64_t sleeptime_ns, endtime_ns;
+
+    if (!cpu_throttle_get_percentage()) {
+        return;
+    }
+
+    pct = (double)cpu_throttle_get_percentage() / 100;
+    throttle_ratio = pct / (1 - pct);
+    /* Add 1ns to fix double's rounding error (like 0.9999999...) */
+    sleeptime_ns = (int64_t)(throttle_ratio * CPU_THROTTLE_TIMESLICE_NS + 1);
+    endtime_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + sleeptime_ns;
+    while (sleeptime_ns > 0 && !cpu->stop) {
+        if (sleeptime_ns > SCALE_MS) {
+            qemu_cond_timedwait_iothread(cpu->halt_cond,
+                                         sleeptime_ns / SCALE_MS);
+        } else {
+            qemu_mutex_unlock_iothread();
+            g_usleep(sleeptime_ns / SCALE_US);
+            qemu_mutex_lock_iothread();
+        }
+        sleeptime_ns = endtime_ns - qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
+    }
+    atomic_set(&cpu->throttle_thread_scheduled, 0);
+}
+
+static void cpu_throttle_timer_tick(void *opaque)
+{
+    CPUState *cpu;
+    double pct;
+
+    /* Stop the timer if needed */
+    if (!cpu_throttle_get_percentage()) {
+        return;
+    }
+    CPU_FOREACH(cpu) {
+        if (!atomic_xchg(&cpu->throttle_thread_scheduled, 1)) {
+            async_run_on_cpu(cpu, cpu_throttle_thread,
+                             RUN_ON_CPU_NULL);
+        }
+    }
+
+    pct = (double)cpu_throttle_get_percentage() / 100;
+    timer_mod(throttle_timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL_RT) +
+                              CPU_THROTTLE_TIMESLICE_NS / (1 - pct));
+}
+
+void cpu_throttle_set(int new_throttle_pct)
+{
+    /* Ensure throttle percentage is within valid range */
+    new_throttle_pct = MIN(new_throttle_pct, CPU_THROTTLE_PCT_MAX);
+    new_throttle_pct = MAX(new_throttle_pct, CPU_THROTTLE_PCT_MIN);
+
+    atomic_set(&throttle_percentage, new_throttle_pct);
+
+    timer_mod(throttle_timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL_RT) +
+                              CPU_THROTTLE_TIMESLICE_NS);
+}
+
+void cpu_throttle_stop(void)
+{
+    atomic_set(&throttle_percentage, 0);
+}
+
+bool cpu_throttle_active(void)
+{
+    return (cpu_throttle_get_percentage() != 0);
+}
+
+int cpu_throttle_get_percentage(void)
+{
+    return atomic_read(&throttle_percentage);
+}
+
+void cpu_throttle_init(void)
+{
+    throttle_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL_RT,
+                                  cpu_throttle_timer_tick, NULL);
+}
+9 -5
softmmu/vl.c
···
         {
             .name = "timestamp",
             .type = QEMU_OPT_BOOL,
         },
+        {
+            .name = "guest-name",
+            .type = QEMU_OPT_BOOL,
+            .help = "Prepends guest name for error messages but only if "
+                    "-name guest is set otherwise option is ignored\n",
+        },
         { /* end of list */ }
     },
 };
···
 static void configure_msg(QemuOpts *opts)
 {
     error_with_timestamp = qemu_opt_get_bool(opts, "timestamp", false);
+    error_with_guestname = qemu_opt_get_bool(opts, "guest-name", false);
 }
 
···
                     g_slist_free(accel_list);
                     exit(0);
                 }
-                if (optarg && strchr(optarg, ':')) {
-                    error_report("Don't use ':' with -accel, "
-                                 "use -M accel=... for now instead");
-                    exit(1);
-                }
                 break;
             case QEMU_OPTION_usb:
                 olist = qemu_find_opts("machine");
···
                 if (!opts) {
                     exit(1);
                 }
+                /* Capture guest name if -msg guest-name is used later */
+                error_guest_name = qemu_opt_get(opts, "guest");
                 break;
             case QEMU_OPTION_prom_env:
                 if (nb_prom_envs >= MAX_PROM_ENVS) {
+1
target/i386/Makefile.objs
···
 obj-$(CONFIG_TCG) += bpt_helper.o cc_helper.o excp_helper.o fpu_helper.o
 obj-$(CONFIG_TCG) += int_helper.o mem_helper.o misc_helper.o mpx_helper.o
 obj-$(CONFIG_TCG) += seg_helper.o smm_helper.o svm_helper.o
+obj-$(call lnot,$(CONFIG_TCG)) += tcg-stub.o
 obj-$(call lnot,$(CONFIG_KVM)) += kvm-stub.o
 ifeq ($(CONFIG_SOFTMMU),y)
 obj-y += machine.o arch_memory_mapping.o arch_dump.o monitor.o
+7 -6
target/i386/cpu.c
···
         NULL, NULL, "avx512-4vnniw", "avx512-4fmaps",
         NULL, NULL, NULL, NULL,
         "avx512-vp2intersect", NULL, "md-clear", NULL,
-        NULL, NULL, NULL, NULL,
-        NULL, NULL, NULL /* pconfig */, NULL,
+        NULL, NULL, "serialize", NULL,
+        "tsx-ldtrk", NULL, NULL /* pconfig */, NULL,
         NULL, NULL, NULL, NULL,
         NULL, NULL, "spec-ctrl", "stibp",
         NULL, "arch-capabilities", "core-capability", "ssbd",
···
     /* init to reset state */
 
     env->hflags2 |= HF2_GIF_MASK;
+    env->hflags &= ~HF_GUEST_MASK;
 
     cpu_x86_update_cr0(env, 0x60000010);
     env->a20_mask = ~0x0;
···
 
     if (kvm_enabled()) {
         kvm_arch_reset_vcpu(cpu);
-    }
-    else if (hvf_enabled()) {
-        hvf_reset_vcpu(s);
     }
 #endif
 }
···
     } else if (cpu->env.cpuid_min_level < 0x14) {
         mark_unavailable_features(cpu, FEAT_7_0_EBX,
             CPUID_7_0_EBX_INTEL_PT,
-            "Intel PT need CPUID leaf 0x14, please set by \"-cpu ...,+intel-pt,level=0x14\"");
+            "Intel PT need CPUID leaf 0x14, please set by \"-cpu ...,+intel-pt,min-level=0x14\"");
     }
 }
···
         host_cpuid(5, 0, &cpu->mwait.eax, &cpu->mwait.ebx,
                    &cpu->mwait.ecx, &cpu->mwait.edx);
         env->features[FEAT_1_ECX] |= CPUID_EXT_MONITOR;
+        if (kvm_enabled() && kvm_has_waitpkg()) {
+            env->features[FEAT_7_0_ECX] |= CPUID_7_0_ECX_WAITPKG;
+        }
     }
     if (kvm_enabled() && cpu->ucode_rev == 0) {
         cpu->ucode_rev = kvm_arch_get_supported_msr_feature(kvm_state,
+10
target/i386/cpu.h
···
 #define CPUID_7_0_EDX_AVX512_4FMAPS     (1U << 3)
 /* AVX512 Vector Pair Intersection to a Pair of Mask Registers */
 #define CPUID_7_0_EDX_AVX512_VP2INTERSECT (1U << 8)
+/* SERIALIZE instruction */
+#define CPUID_7_0_EDX_SERIALIZE         (1U << 14)
+/* TSX Suspend Load Address Tracking instruction */
+#define CPUID_7_0_EDX_TSX_LDTRK         (1U << 16)
 /* Speculation Control */
 #define CPUID_7_0_EDX_SPEC_CTRL         (1U << 26)
 /* Single Thread Indirect Branch Predictors */
···
     return env->features[FEAT_1_ECX] & CPUID_EXT_VMX;
 }
 
+static inline bool cpu_has_svm(CPUX86State *env)
+{
+    return env->features[FEAT_8000_0001_ECX] & CPUID_EXT3_SVM;
+}
+
 /*
  * In order for a vCPU to enter VMX operation it must have CR4.VMXE set.
  * Since it was set, CR4.VMXE must remain set as long as vCPU is in
···
 /* fpu_helper.c */
 void update_fp_status(CPUX86State *env);
 void update_mxcsr_status(CPUX86State *env);
+void update_mxcsr_from_sse_status(CPUX86State *env);
 
 static inline void cpu_set_mxcsr(CPUX86State *env, uint32_t mxcsr)
 {
+2 -2
target/i386/excp_helper.c
···
     }
     ptep = pde | PG_NX_MASK;
 
-    /* if PSE bit is set, then we use a 4MB page */
-    if ((pde & PG_PSE_MASK) && (env->cr[4] & CR4_PSE_MASK)) {
+    /* if host cr4 PSE bit is set, then we use a 4MB page */
+    if ((pde & PG_PSE_MASK) && (env->nested_pg_mode & SVM_NPT_PSE)) {
         page_size = 4096 * 1024;
         pte_addr = pde_addr;
 
+36 -1
target/i386/fpu_helper.c
···
 
 static void do_xsave_mxcsr(CPUX86State *env, target_ulong ptr, uintptr_t ra)
 {
+    update_mxcsr_from_sse_status(env);
     cpu_stl_data_ra(env, ptr + XO(legacy.mxcsr), env->mxcsr, ra);
     cpu_stl_data_ra(env, ptr + XO(legacy.mxcsr_mask), 0x0000ffff, ra);
 }
···
     }
     set_float_rounding_mode(rnd_type, &env->sse_status);
 
+    /* Set exception flags. */
+    set_float_exception_flags((mxcsr & FPUS_IE ? float_flag_invalid : 0) |
+                              (mxcsr & FPUS_ZE ? float_flag_divbyzero : 0) |
+                              (mxcsr & FPUS_OE ? float_flag_overflow : 0) |
+                              (mxcsr & FPUS_UE ? float_flag_underflow : 0) |
+                              (mxcsr & FPUS_PE ? float_flag_inexact : 0),
+                              &env->sse_status);
+
     /* set denormals are zero */
     set_flush_inputs_to_zero((mxcsr & SSE_DAZ) ? 1 : 0, &env->sse_status);
 
     /* set flush to zero */
-    set_flush_to_zero((mxcsr & SSE_FZ) ? 1 : 0, &env->fp_status);
+    set_flush_to_zero((mxcsr & SSE_FZ) ? 1 : 0, &env->sse_status);
+}
+
+void update_mxcsr_from_sse_status(CPUX86State *env)
+{
+    if (tcg_enabled()) {
+        uint8_t flags = get_float_exception_flags(&env->sse_status);
+        /*
+         * The MXCSR denormal flag has opposite semantics to
+         * float_flag_input_denormal (the softfloat code sets that flag
+         * only when flushing input denormals to zero, but SSE sets it
+         * only when not flushing them to zero), so is not converted
+         * here.
+         */
+        env->mxcsr |= ((flags & float_flag_invalid ? FPUS_IE : 0) |
+                       (flags & float_flag_divbyzero ? FPUS_ZE : 0) |
+                       (flags & float_flag_overflow ? FPUS_OE : 0) |
+                       (flags & float_flag_underflow ? FPUS_UE : 0) |
+                       (flags & float_flag_inexact ? FPUS_PE : 0) |
+                       (flags & float_flag_output_denormal ? FPUS_UE | FPUS_PE :
+                        0));
+    }
+}
+
+void helper_update_mxcsr(CPUX86State *env)
+{
+    update_mxcsr_from_sse_status(env);
 }
 
 void helper_ldmxcsr(CPUX86State *env, uint32_t val)
+1
target/i386/gdbstub.c
···
         return gdb_get_reg32(mem_buf, 0); /* fop */
 
     case IDX_MXCSR_REG:
+        update_mxcsr_from_sse_status(env);
         return gdb_get_reg32(mem_buf, env->mxcsr);
 
     case IDX_CTL_CR0_REG:
+4 -2
target/i386/helper.c
···
     dump_apic_lvt("LVTTHMR", lvt[APIC_LVT_THERMAL], false);
     dump_apic_lvt("LVTT", lvt[APIC_LVT_TIMER], true);
 
-    qemu_printf("Timer\t DCR=0x%x (divide by %u) initial_count = %u\n",
+    qemu_printf("Timer\t DCR=0x%x (divide by %u) initial_count = %u"
+                " current_count = %u\n",
                 s->divide_conf & APIC_DCR_MASK,
                 divider_conf(s->divide_conf),
-                s->initial_count);
+                s->initial_count, apic_get_current_count(s));
 
     qemu_printf("SPIV\t 0x%08x APIC %s, focus=%s, spurious vec %u\n",
                 s->spurious_vec,
···
     for(i = 0; i < 8; i++) {
         fptag |= ((!env->fptags[i]) << i);
     }
+    update_mxcsr_from_sse_status(env);
     qemu_fprintf(f, "FCW=%04x FSW=%04x [ST=%d] FTW=%02x MXCSR=%08x\n",
                  env->fpuc,
                  (env->fpus & ~0x3800) | (env->fpstt & 0x7) << 11,
+1
target/i386/helper.h
···
 /* MMX/SSE */
 
 DEF_HELPER_2(ldmxcsr, void, env, i32)
+DEF_HELPER_1(update_mxcsr, void, env)
 DEF_HELPER_1(enter_mmx, void, env)
 DEF_HELPER_1(emms, void, env)
 DEF_HELPER_3(movq, void, env, ptr, ptr)
+27 -110
target/i386/hvf/hvf.c
···
     }
 }
 
-/* TODO: synchronize vcpu state */
 static void do_hvf_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg)
 {
-    CPUState *cpu_state = cpu;
-    if (cpu_state->vcpu_dirty == 0) {
-        hvf_get_registers(cpu_state);
+    if (!cpu->vcpu_dirty) {
+        hvf_get_registers(cpu);
+        cpu->vcpu_dirty = true;
     }
-
-    cpu_state->vcpu_dirty = 1;
 }
 
-void hvf_cpu_synchronize_state(CPUState *cpu_state)
+void hvf_cpu_synchronize_state(CPUState *cpu)
 {
-    if (cpu_state->vcpu_dirty == 0) {
-        run_on_cpu(cpu_state, do_hvf_cpu_synchronize_state, RUN_ON_CPU_NULL);
+    if (!cpu->vcpu_dirty) {
+        run_on_cpu(cpu, do_hvf_cpu_synchronize_state, RUN_ON_CPU_NULL);
     }
 }
 
-static void do_hvf_cpu_synchronize_post_reset(CPUState *cpu, run_on_cpu_data arg)
+static void do_hvf_cpu_synchronize_post_reset(CPUState *cpu,
+                                              run_on_cpu_data arg)
 {
-    CPUState *cpu_state = cpu;
-    hvf_put_registers(cpu_state);
-    cpu_state->vcpu_dirty = false;
+    hvf_put_registers(cpu);
+    cpu->vcpu_dirty = false;
 }
 
-void hvf_cpu_synchronize_post_reset(CPUState *cpu_state)
+void hvf_cpu_synchronize_post_reset(CPUState *cpu)
 {
-    run_on_cpu(cpu_state, do_hvf_cpu_synchronize_post_reset, RUN_ON_CPU_NULL);
+    run_on_cpu(cpu, do_hvf_cpu_synchronize_post_reset, RUN_ON_CPU_NULL);
 }
 
 static void do_hvf_cpu_synchronize_post_init(CPUState *cpu,
                                              run_on_cpu_data arg)
 {
-    CPUState *cpu_state = cpu;
-    hvf_put_registers(cpu_state);
-    cpu_state->vcpu_dirty = false;
+    hvf_put_registers(cpu);
+    cpu->vcpu_dirty = false;
+}
+
+void hvf_cpu_synchronize_post_init(CPUState *cpu)
+{
+    run_on_cpu(cpu, do_hvf_cpu_synchronize_post_init, RUN_ON_CPU_NULL);
+}
+
+static void do_hvf_cpu_synchronize_pre_loadvm(CPUState *cpu,
+                                              run_on_cpu_data arg)
+{
+    cpu->vcpu_dirty = true;
 }
 
-void hvf_cpu_synchronize_post_init(CPUState *cpu_state)
+void hvf_cpu_synchronize_pre_loadvm(CPUState *cpu)
 {
-    run_on_cpu(cpu_state, do_hvf_cpu_synchronize_post_init, RUN_ON_CPU_NULL);
+    run_on_cpu(cpu, do_hvf_cpu_synchronize_pre_loadvm, RUN_ON_CPU_NULL);
 }
 
 static bool ept_emulation_fault(hvf_slot *slot, uint64_t gpa, uint64_t ept_qual)
···
     .log_stop = hvf_log_stop,
     .log_sync = hvf_log_sync,
 };
-
-void hvf_reset_vcpu(CPUState *cpu) {
-    uint64_t pdpte[4] = {0, 0, 0, 0};
-    int i;
-
-    /* TODO: this shouldn't be needed; there is already a call to
-     * cpu_synchronize_all_post_reset in vl.c
-     */
-    wvmcs(cpu->hvf_fd, VMCS_ENTRY_CTLS, 0);
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_IA32_EFER, 0);
-
-    /* Initialize PDPTE */
-    for (i = 0; i < 4; i++) {
-        wvmcs(cpu->hvf_fd, VMCS_GUEST_PDPTE0 + i * 2, pdpte[i]);
-    }
-
-    macvm_set_cr0(cpu->hvf_fd, 0x60000010);
-
-    wvmcs(cpu->hvf_fd, VMCS_CR4_MASK, CR4_VMXE_MASK);
-    wvmcs(cpu->hvf_fd, VMCS_CR4_SHADOW, 0x0);
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_CR4, CR4_VMXE_MASK);
-
-    /* set VMCS guest state fields */
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_CS_SELECTOR, 0xf000);
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_CS_LIMIT, 0xffff);
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_CS_ACCESS_RIGHTS, 0x9b);
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_CS_BASE, 0xffff0000);
-
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_DS_SELECTOR, 0);
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_DS_LIMIT, 0xffff);
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_DS_ACCESS_RIGHTS, 0x93);
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_DS_BASE, 0);
-
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_ES_SELECTOR, 0);
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_ES_LIMIT, 0xffff);
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_ES_ACCESS_RIGHTS, 0x93);
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_ES_BASE, 0);
-
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_FS_SELECTOR, 0);
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_FS_LIMIT, 0xffff);
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_FS_ACCESS_RIGHTS, 0x93);
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_FS_BASE, 0);
-
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_GS_SELECTOR, 0);
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_GS_LIMIT, 0xffff);
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_GS_ACCESS_RIGHTS, 0x93);
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_GS_BASE, 0);
-
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_SS_SELECTOR, 0);
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_SS_LIMIT, 0xffff);
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_SS_ACCESS_RIGHTS, 0x93);
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_SS_BASE, 0);
-
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_SELECTOR, 0);
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_LIMIT, 0);
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_ACCESS_RIGHTS, 0x10000);
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_BASE, 0);
-
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_TR_SELECTOR, 0);
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_TR_LIMIT, 0);
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_TR_ACCESS_RIGHTS, 0x83);
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_TR_BASE, 0);
-
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_LIMIT, 0);
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_BASE, 0);
-
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_IDTR_LIMIT, 0);
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_IDTR_BASE, 0);
-
-    /*wvmcs(cpu->hvf_fd, VMCS_GUEST_CR2, 0x0);*/
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_CR3, 0x0);
-
-    wreg(cpu->hvf_fd, HV_X86_RIP, 0xfff0);
-    wreg(cpu->hvf_fd, HV_X86_RDX, 0x623);
-    wreg(cpu->hvf_fd, HV_X86_RFLAGS, 0x2);
-    wreg(cpu->hvf_fd, HV_X86_RSP, 0x0);
-    wreg(cpu->hvf_fd, HV_X86_RAX, 0x0);
-    wreg(cpu->hvf_fd, HV_X86_RBX, 0x0);
-    wreg(cpu->hvf_fd, HV_X86_RCX, 0x0);
-    wreg(cpu->hvf_fd, HV_X86_RSI, 0x0);
-    wreg(cpu->hvf_fd, HV_X86_RDI, 0x0);
-    wreg(cpu->hvf_fd, HV_X86_RBP, 0x0);
-
-    for (int i = 0; i < 8; i++) {
-        wreg(cpu->hvf_fd, HV_X86_R8 + i, 0x0);
-    }
-
-    hv_vcpu_invalidate_tlb(cpu->hvf_fd);
-    hv_vcpu_flush(cpu->hvf_fd);
-}
 
 void hvf_vcpu_destroy(CPUState *cpu)
 {
+12 -5
target/i386/hvf/vmx.h
···
     uint64_t pdpte[4] = {0, 0, 0, 0};
     uint64_t efer = rvmcs(vcpu, VMCS_GUEST_IA32_EFER);
     uint64_t old_cr0 = rvmcs(vcpu, VMCS_GUEST_CR0);
+    uint64_t changed_cr0 = old_cr0 ^ cr0;
     uint64_t mask = CR0_PG | CR0_CD | CR0_NW | CR0_NE | CR0_ET;
+    uint64_t entry_ctls;
 
     if ((cr0 & CR0_PG) && (rvmcs(vcpu, VMCS_GUEST_CR4) & CR4_PAE) &&
         !(efer & MSR_EFER_LME)) {
···
     wvmcs(vcpu, VMCS_CR0_SHADOW, cr0);
 
     if (efer & MSR_EFER_LME) {
-        if (!(old_cr0 & CR0_PG) && (cr0 & CR0_PG)) {
-            enter_long_mode(vcpu, cr0, efer);
+        if (changed_cr0 & CR0_PG) {
+            if (cr0 & CR0_PG) {
+                enter_long_mode(vcpu, cr0, efer);
+            } else {
+                exit_long_mode(vcpu, cr0, efer);
+            }
         }
-        if (/*(old_cr0 & CR0_PG) &&*/ !(cr0 & CR0_PG)) {
-            exit_long_mode(vcpu, cr0, efer);
-        }
+    } else {
+        entry_ctls = rvmcs(vcpu, VMCS_ENTRY_CTLS);
+        wvmcs(vcpu, VMCS_ENTRY_CTLS, entry_ctls & ~VM_ENTRY_GUEST_LMA);
     }
 
     /* Filter new CR0 after we are finished examining it above. */
···
 
     /* BUG, should take considering overlap.. */
     wreg(cpu->hvf_fd, HV_X86_RIP, rip);
+    env->eip = rip;
 
     /* after moving forward in rip, we need to clean INTERRUPTABILITY */
     val = rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY);
+39 -14
target/i386/kvm.c
···
         if (host_tsx_blacklisted()) {
             ret &= ~(CPUID_7_0_EBX_RTM | CPUID_7_0_EBX_HLE);
         }
-    } else if (function == 7 && index == 0 && reg == R_ECX) {
-        if (enable_cpu_pm) {
-            ret |= CPUID_7_0_ECX_WAITPKG;
-        } else {
-            ret &= ~CPUID_7_0_ECX_WAITPKG;
-        }
     } else if (function == 7 && index == 0 && reg == R_EDX) {
         /*
          * Linux v4.17-v4.20 incorrectly return ARCH_CAPABILITIES on SVM hosts.
···
     if (max_nested_state_len > 0) {
         assert(max_nested_state_len >= offsetof(struct kvm_nested_state, data));
 
-        if (cpu_has_vmx(env)) {
+        if (cpu_has_vmx(env) || cpu_has_svm(env)) {
             struct kvm_vmx_nested_state_hdr *vmx_hdr;
 
             env->nested_state = g_malloc0(max_nested_state_len);
             env->nested_state->size = max_nested_state_len;
             env->nested_state->format = KVM_STATE_NESTED_FORMAT_VMX;
 
-            vmx_hdr = &env->nested_state->hdr.vmx;
-            vmx_hdr->vmxon_pa = -1ull;
-            vmx_hdr->vmcs12_pa = -1ull;
+            if (cpu_has_vmx(env)) {
+                vmx_hdr = &env->nested_state->hdr.vmx;
+                vmx_hdr->vmxon_pa = -1ull;
+                vmx_hdr->vmcs12_pa = -1ull;
+            }
         }
     }
···
         return 0;
     }
 
+    /*
+     * Copy flags that are affected by reset from env->hflags and env->hflags2.
+     */
+    if (env->hflags & HF_GUEST_MASK) {
+        env->nested_state->flags |= KVM_STATE_NESTED_GUEST_MODE;
+    } else {
+        env->nested_state->flags &= ~KVM_STATE_NESTED_GUEST_MODE;
+    }
+    if (env->hflags2 & HF2_GIF_MASK) {
+        env->nested_state->flags |= KVM_STATE_NESTED_GIF_SET;
+    } else {
+        env->nested_state->flags &= ~KVM_STATE_NESTED_GIF_SET;
+    }
+
     assert(env->nested_state->size <= max_nested_state_len);
     return kvm_vcpu_ioctl(CPU(cpu), KVM_SET_NESTED_STATE, env->nested_state);
 }
···
         return ret;
     }
 
+    /*
+     * Copy flags that are affected by reset to env->hflags and env->hflags2.
+     */
     if (env->nested_state->flags & KVM_STATE_NESTED_GUEST_MODE) {
         env->hflags |= HF_GUEST_MASK;
     } else {
         env->hflags &= ~HF_GUEST_MASK;
+    }
+    if (env->nested_state->flags & KVM_STATE_NESTED_GIF_SET) {
+        env->hflags2 |= HF2_GIF_MASK;
+    } else {
+        env->hflags2 &= ~HF2_GIF_MASK;
     }
 
     return ret;
···
 
     assert(cpu_is_stopped(cpu) || qemu_cpu_is_self(cpu));
 
+    /* must be before kvm_put_nested_state so that EFER.SVME is set */
+    ret = kvm_put_sregs(x86_cpu);
+    if (ret < 0) {
+        return ret;
+    }
+
     if (level >= KVM_PUT_RESET_STATE) {
         ret = kvm_put_nested_state(x86_cpu);
         if (ret < 0) {
···
         return ret;
     }
     ret = kvm_put_xcrs(x86_cpu);
-    if (ret < 0) {
-        return ret;
-    }
-    ret = kvm_put_sregs(x86_cpu);
     if (ret < 0) {
         return ret;
     }
···
 {
     abort();
 }
+
+bool kvm_has_waitpkg(void)
+{
+    return has_msr_umwait;
+}
+1
target/i386/kvm_i386.h
···
 
 bool kvm_enable_x2apic(void);
 bool kvm_has_x2apic_api(void);
+bool kvm_has_waitpkg(void);
 
 bool kvm_hv_vpindex_settable(void);
 
+30 -1
target/i386/machine.c
···
     }
 };
 
+static bool svm_nested_state_needed(void *opaque)
+{
+    struct kvm_nested_state *nested_state = opaque;
+
+    /*
+     * HF_GUEST_MASK and HF2_GIF_MASK are already serialized
+     * via hflags and hflags2, all that's left is the opaque
+     * nested state blob.
+     */
+    return (nested_state->format == KVM_STATE_NESTED_FORMAT_SVM &&
+            nested_state->size > offsetof(struct kvm_nested_state, data));
+}
+
+static const VMStateDescription vmstate_svm_nested_state = {
+    .name = "cpu/kvm_nested_state/svm",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = svm_nested_state_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_U64(hdr.svm.vmcb_pa, struct kvm_nested_state),
+        VMSTATE_UINT8_ARRAY(data.svm[0].vmcb12,
+                            struct kvm_nested_state,
+                            KVM_STATE_NESTED_SVM_VMCB_SIZE),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
 static bool nested_state_needed(void *opaque)
 {
     X86CPU *cpu = opaque;
     CPUX86State *env = &cpu->env;
 
     return (env->nested_state &&
-            vmx_nested_state_needed(env->nested_state));
+            (vmx_nested_state_needed(env->nested_state) ||
+             svm_nested_state_needed(env->nested_state)));
 }
 
 static int nested_state_post_load(void *opaque, int version_id)
···
     },
     .subsections = (const VMStateDescription*[]) {
         &vmstate_vmx_nested_state,
+        &vmstate_svm_nested_state,
         NULL
     }
 };
+1 -9
target/i386/monitor.c
···
 
 SevCapability *qmp_query_sev_capabilities(Error **errp)
 {
-    SevCapability *data;
-
-    data = sev_get_capabilities();
-    if (!data) {
-        error_setg(errp, "SEV feature is not available");
-        return NULL;
-    }
-
-    return data;
+    return sev_get_capabilities(errp);
 }
+16 -12
target/i386/ops_sse.h
···
 
 void helper_rsqrtps(CPUX86State *env, ZMMReg *d, ZMMReg *s)
 {
+    uint8_t old_flags = get_float_exception_flags(&env->sse_status);
     d->ZMM_S(0) = float32_div(float32_one,
                               float32_sqrt(s->ZMM_S(0), &env->sse_status),
                               &env->sse_status);
···
     d->ZMM_S(3) = float32_div(float32_one,
                               float32_sqrt(s->ZMM_S(3), &env->sse_status),
                               &env->sse_status);
+    set_float_exception_flags(old_flags, &env->sse_status);
 }
 
 void helper_rsqrtss(CPUX86State *env, ZMMReg *d, ZMMReg *s)
 {
+    uint8_t old_flags = get_float_exception_flags(&env->sse_status);
     d->ZMM_S(0) = float32_div(float32_one,
                               float32_sqrt(s->ZMM_S(0), &env->sse_status),
                               &env->sse_status);
+    set_float_exception_flags(old_flags, &env->sse_status);
 }
 
 void helper_rcpps(CPUX86State *env, ZMMReg *d, ZMMReg *s)
 {
+    uint8_t old_flags = get_float_exception_flags(&env->sse_status);
     d->ZMM_S(0) = float32_div(float32_one, s->ZMM_S(0), &env->sse_status);
     d->ZMM_S(1) = float32_div(float32_one, s->ZMM_S(1), &env->sse_status);
     d->ZMM_S(2) = float32_div(float32_one, s->ZMM_S(2), &env->sse_status);
     d->ZMM_S(3) = float32_div(float32_one, s->ZMM_S(3), &env->sse_status);
+    set_float_exception_flags(old_flags, &env->sse_status);
 }
 
 void helper_rcpss(CPUX86State *env, ZMMReg *d, ZMMReg *s)
 {
+    uint8_t old_flags = get_float_exception_flags(&env->sse_status);
     d->ZMM_S(0) = float32_div(float32_one, s->ZMM_S(0), &env->sse_status);
+    set_float_exception_flags(old_flags, &env->sse_status);
 }
 
 static inline uint64_t helper_extrq(uint64_t src, int shift, int len)
···
 void glue(helper_roundps, SUFFIX)(CPUX86State *env, Reg *d, Reg *s,
                                   uint32_t mode)
 {
+    uint8_t old_flags = get_float_exception_flags(&env->sse_status);
     signed char prev_rounding_mode;
 
     prev_rounding_mode = env->sse_status.float_rounding_mode;
···
     d->ZMM_S(2) = float32_round_to_int(s->ZMM_S(2), &env->sse_status);
     d->ZMM_S(3) = float32_round_to_int(s->ZMM_S(3), &env->sse_status);
 
-#if 0 /* TODO */
-    if (mode & (1 << 3)) {
+    if (mode & (1 << 3) && !(old_flags & float_flag_inexact)) {
         set_float_exception_flags(get_float_exception_flags(&env->sse_status) &
                                   ~float_flag_inexact,
                                   &env->sse_status);
     }
-#endif
     env->sse_status.float_rounding_mode = prev_rounding_mode;
 }
 
 void glue(helper_roundpd, SUFFIX)(CPUX86State *env, Reg *d, Reg *s,
                                   uint32_t mode)
 {
+    uint8_t old_flags = get_float_exception_flags(&env->sse_status);
     signed char prev_rounding_mode;
 
     prev_rounding_mode = env->sse_status.float_rounding_mode;
···
     d->ZMM_D(0) = float64_round_to_int(s->ZMM_D(0), &env->sse_status);
     d->ZMM_D(1) = float64_round_to_int(s->ZMM_D(1), &env->sse_status);
 
-#if 0 /* TODO */
-    if (mode & (1 << 3)) {
+    if (mode & (1 << 3) && !(old_flags & float_flag_inexact)) {
         set_float_exception_flags(get_float_exception_flags(&env->sse_status) &
                                   ~float_flag_inexact,
                                   &env->sse_status);
     }
-#endif
     env->sse_status.float_rounding_mode = prev_rounding_mode;
 }
 
 void glue(helper_roundss, SUFFIX)(CPUX86State *env, Reg *d, Reg *s,
                                   uint32_t mode)
 {
+    uint8_t old_flags = get_float_exception_flags(&env->sse_status);
     signed char prev_rounding_mode;
 
     prev_rounding_mode = env->sse_status.float_rounding_mode;
···
 
     d->ZMM_S(0) = float32_round_to_int(s->ZMM_S(0), &env->sse_status);
 
-#if 0 /* TODO */
-    if (mode & (1 << 3)) {
+    if (mode & (1 << 3) && !(old_flags & float_flag_inexact)) {
         set_float_exception_flags(get_float_exception_flags(&env->sse_status) &
                                   ~float_flag_inexact,
                                   &env->sse_status);
     }
-#endif
     env->sse_status.float_rounding_mode = prev_rounding_mode;
 }
 
 void glue(helper_roundsd, SUFFIX)(CPUX86State *env, Reg *d, Reg *s,
                                   uint32_t mode)
 {
+    uint8_t old_flags = get_float_exception_flags(&env->sse_status);
     signed char prev_rounding_mode;
 
     prev_rounding_mode = env->sse_status.float_rounding_mode;
···
 
     d->ZMM_D(0) = float64_round_to_int(s->ZMM_D(0), &env->sse_status);
 
-#if 0 /* TODO */
-    if (mode & (1 << 3)) {
+    if (mode & (1 << 3) && !(old_flags & float_flag_inexact)) {
         set_float_exception_flags(get_float_exception_flags(&env->sse_status) &
                                   ~float_flag_inexact,
                                   &env->sse_status);
     }
-#endif
     env->sse_status.float_rounding_mode = prev_rounding_mode;
 }
 
+2 -1
target/i386/sev-stub.c
···
     return NULL;
 }
 
-SevCapability *sev_get_capabilities(void)
+SevCapability *sev_get_capabilities(Error **errp)
 {
+    error_setg(errp, "SEV is not available in this QEMU");
     return NULL;
 }
+18 -9
target/i386/sev.c
···
 
 static int
 sev_get_pdh_info(int fd, guchar **pdh, size_t *pdh_len, guchar **cert_chain,
-                 size_t *cert_chain_len)
+                 size_t *cert_chain_len, Error **errp)
 {
     guchar *pdh_data = NULL;
     guchar *cert_chain_data = NULL;
···
     r = sev_platform_ioctl(fd, SEV_PDH_CERT_EXPORT, &export, &err);
     if (r < 0) {
         if (err != SEV_RET_INVALID_LEN) {
-            error_report("failed to export PDH cert ret=%d fw_err=%d (%s)",
-                         r, err, fw_error_to_str(err));
+            error_setg(errp, "failed to export PDH cert ret=%d fw_err=%d (%s)",
+                       r, err, fw_error_to_str(err));
             return 1;
         }
     }
···
 
     r = sev_platform_ioctl(fd, SEV_PDH_CERT_EXPORT, &export, &err);
     if (r < 0) {
-        error_report("failed to export PDH cert ret=%d fw_err=%d (%s)",
-                     r, err, fw_error_to_str(err));
+        error_setg(errp, "failed to export PDH cert ret=%d fw_err=%d (%s)",
+                   r, err, fw_error_to_str(err));
         goto e_free;
     }
 
···
 }
 
 SevCapability *
-sev_get_capabilities(void)
+sev_get_capabilities(Error **errp)
 {
     SevCapability *cap = NULL;
     guchar *pdh_data = NULL;
···
     uint32_t ebx;
     int fd;
 
+    if (!kvm_enabled()) {
+        error_setg(errp, "KVM not enabled");
+        return NULL;
+    }
+    if (kvm_vm_ioctl(kvm_state, KVM_MEMORY_ENCRYPT_OP, NULL) < 0) {
+        error_setg(errp, "SEV is not enabled in KVM");
+        return NULL;
+    }
+
     fd = open(DEFAULT_SEV_DEVICE, O_RDWR);
     if (fd < 0) {
-        error_report("%s: Failed to open %s '%s'", __func__,
-                     DEFAULT_SEV_DEVICE, strerror(errno));
+        error_setg_errno(errp, errno, "Failed to open %s",
+                         DEFAULT_SEV_DEVICE);
         return NULL;
     }
 
     if (sev_get_pdh_info(fd, &pdh_data, &pdh_len,
-                         &cert_chain_data, &cert_chain_len)) {
+                         &cert_chain_data, &cert_chain_len, errp)) {
         goto out;
     }
 
+1 -1
target/i386/sev_i386.h
···
 extern uint32_t sev_get_cbit_position(void);
 extern uint32_t sev_get_reduced_phys_bits(void);
 extern char *sev_get_launch_measurement(void);
-extern SevCapability *sev_get_capabilities(void);
+extern SevCapability *sev_get_capabilities(Error **errp);
 
 #endif
+1
target/i386/svm.h
···
 #define SVM_NPT_PAE         (1 << 0)
 #define SVM_NPT_LMA         (1 << 1)
 #define SVM_NPT_NXE         (1 << 2)
+#define SVM_NPT_PSE         (1 << 3)
 
 #define SVM_NPTEXIT_P       (1ULL << 0)
 #define SVM_NPTEXIT_RW      (1ULL << 1)
+6 -1
target/i386/svm_helper.c
···
 
     nested_ctl = x86_ldq_phys(cs, env->vm_vmcb + offsetof(struct vmcb,
                                                           control.nested_ctl));
+
+    env->nested_pg_mode = 0;
+
     if (nested_ctl & SVM_NPT_ENABLED) {
         env->nested_cr3 = x86_ldq_phys(cs,
                                 env->vm_vmcb + offsetof(struct vmcb,
                                                         control.nested_cr3));
         env->hflags2 |= HF2_NPT_MASK;
 
-        env->nested_pg_mode = 0;
         if (env->cr[4] & CR4_PAE_MASK) {
             env->nested_pg_mode |= SVM_NPT_PAE;
         }
+        if (env->cr[4] & CR4_PSE_MASK) {
+            env->nested_pg_mode |= SVM_NPT_PSE;
+        }
         if (env->hflags & HF_LMA_MASK) {
             env->nested_pg_mode |= SVM_NPT_LMA;
+25
target/i386/tcg-stub.c
···
+/*
+ * x86 FPU, MMX/3DNow!/SSE/SSE2/SSE3/SSSE3/SSE4/PNI helpers
+ *
+ * Copyright (c) 2003 Fabrice Bellard
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/osdep.h"
+#include "cpu.h"
+
+void update_mxcsr_from_sse_status(CPUX86State *env)
+{
+}
+17 -19
target/i386/translate.c
···
 
 static inline void gen_ins(DisasContext *s, MemOp ot)
 {
-    if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
-        gen_io_start();
-    }
     gen_string_movl_A0_EDI(s);
     /* Note: we must do this dummy write first to be restartable in
        case of page fault. */
···
     gen_op_movl_T0_Dshift(s, ot);
     gen_op_add_reg_T0(s, s->aflag, R_EDI);
     gen_bpt_io(s, s->tmp2_i32, ot);
-    if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
-        gen_io_end();
-    }
 }
 
 static inline void gen_outs(DisasContext *s, MemOp ot)
 {
-    if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
-        gen_io_start();
-    }
     gen_string_movl_A0_ESI(s);
     gen_op_ld_v(s, ot, s->T0, s->A0);
···
     gen_op_movl_T0_Dshift(s, ot);
     gen_op_add_reg_T0(s, s->aflag, R_ESI);
     gen_bpt_io(s, s->tmp2_i32, ot);
-    if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
-        gen_io_end();
-    }
 }
 
 /* same method as Valgrind : we generate jumps to current or next
···
         tcg_gen_ext16u_tl(s->T0, cpu_regs[R_EDX]);
         gen_check_io(s, ot, pc_start - s->cs_base,
                      SVM_IOIO_TYPE_MASK | svm_is_rep(prefixes) | 4);
+        if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
+            gen_io_start();
+        }
         if (prefixes & (PREFIX_REPZ | PREFIX_REPNZ)) {
             gen_repz_ins(s, ot, pc_start - s->cs_base, s->pc - s->cs_base);
+            /* jump generated by gen_repz_ins */
         } else {
             gen_ins(s, ot);
             if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
···
         tcg_gen_ext16u_tl(s->T0, cpu_regs[R_EDX]);
         gen_check_io(s, ot, pc_start - s->cs_base,
                      svm_is_rep(prefixes) | 4);
+        if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
+            gen_io_start();
+        }
         if (prefixes & (PREFIX_REPZ | PREFIX_REPNZ)) {
             gen_repz_outs(s, ot, pc_start - s->cs_base,
                           s->pc - s->cs_base);
+            /* jump generated by gen_repz_outs */
         } else {
             gen_outs(s, ot);
             if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
···
         CASE_MODRM_OP(4): /* smsw */
             gen_svm_check_intercept(s, pc_start, SVM_EXIT_READ_CR0);
             tcg_gen_ld_tl(s->T0, cpu_env, offsetof(CPUX86State, cr[0]));
-            if (CODE64(s)) {
-                mod = (modrm >> 6) & 3;
-                ot = (mod != 3 ? MO_16 : s->dflag);
-            } else {
-                ot = MO_16;
-            }
+            /*
+             * In 32-bit mode, the higher 16 bits of the destination
+             * register are undefined.  In practice CR0[31:0] is stored
+             * just like in 64-bit mode.
+             */
+            mod = (modrm >> 6) & 3;
+            ot = (mod != 3 ? MO_16 : s->dflag);
             gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1);
             break;
         case 0xee: /* rdpkru */
···
                 gen_helper_read_crN(s->T0, cpu_env, tcg_const_i32(reg));
                 gen_op_mov_reg_v(s, ot, rm, s->T0);
                 if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
-                    gen_io_end();
+                    gen_jmp(s, s->pc - s->cs_base);
                 }
             }
             break;
···
                 gen_exception(s, EXCP07_PREX, pc_start - s->cs_base);
                 break;
             }
+            gen_helper_update_mxcsr(cpu_env);
             gen_lea_modrm(env, s, modrm);
             tcg_gen_ld32u_tl(s->T0, cpu_env, offsetof(CPUX86State, mxcsr));
             gen_op_st_v(s, MO_32, s->T0, s->A0);
+1 -1
tests/Makefile.include
···
 { export MALLOC_PERTURB_=$${MALLOC_PERTURB_:-$$(( $${RANDOM:-0} % 255 + 1))} $2; \
   $(foreach COMMAND, $1, \
     $(COMMAND) -m=$(SPEED) -k --tap < /dev/null \
-    | sed "s/^[a-z][a-z]* [0-9]* /&$(notdir $(COMMAND)) /" || true; ) } \
+    | sed "s/^\(not \)\?ok [0-9]* /&$(notdir $(COMMAND)) /" || true; ) } \
   | ./scripts/tap-merge.pl | tee "$@" \
   | ./scripts/tap-driver.pl $(if $(V),, --show-failures-only), \
   "TAP","$@")
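The sed change tightens which TAP lines get the test-binary name injected: the old pattern `^[a-z][a-z]* [0-9]* ` matched any line starting with a word and a number (including diagnostic output), while the new one matches only real `ok`/`not ok` result lines. A quick demonstration; `check-foo` is a made-up binary name, and the `\?` quantifier in a basic regex is a GNU sed extension:

```shell
#!/bin/sh
# Inject a test-binary name after the TAP result prefix, as the
# Makefile rule does with $(notdir $(COMMAND)).
rewrite() {
    sed "s/^\(not \)\?ok [0-9]* /&check-foo /"
}

echo "ok 1 pass"     | rewrite   # -> ok 1 check-foo pass
echo "not ok 2 fail" | rewrite   # -> not ok 2 check-foo fail
echo "some 3 output" | rewrite   # unchanged: not a TAP result line
```

With the old pattern, the third line would have been rewritten to "some 3 check-foo output", confusing tap-merge.pl; hence the improved failure reporting for TAP-based tests.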
+104 -5
tests/qtest/qmp-cmd-test.c
···
     }
 }
 
-static void test_object_add_without_props(void)
+static void test_object_add_failure_modes(void)
 {
     QTestState *qts;
     QDict *resp;
 
+    /* attempt to create an object without props */
     qts = qtest_init(common_args);
     resp = qtest_qmp(qts, "{'execute': 'object-add', 'arguments':"
-                     " {'qom-type': 'memory-backend-ram', 'id': 'ram1' } }");
+                     " {'qom-type': 'memory-backend-ram', 'id': 'ram1' } }");
+    g_assert_nonnull(resp);
+    qmp_assert_error_class(resp, "GenericError");
+
+    /* attempt to create an object without qom-type */
+    resp = qtest_qmp(qts, "{'execute': 'object-add', 'arguments':"
+                     " {'id': 'ram1' } }");
+    g_assert_nonnull(resp);
+    qmp_assert_error_class(resp, "GenericError");
+
+    /* attempt to delete an object that does not exist */
+    resp = qtest_qmp(qts, "{'execute': 'object-del', 'arguments':"
+                     " {'id': 'ram1' } }");
+    g_assert_nonnull(resp);
+    qmp_assert_error_class(resp, "GenericError");
+
+    /* attempt to create 2 objects with duplicate id */
+    resp = qtest_qmp(qts, "{'execute': 'object-add', 'arguments':"
+                     " {'qom-type': 'memory-backend-ram', 'id': 'ram1',"
+                     " 'props': {'size': 1048576 } } }");
+    g_assert_nonnull(resp);
+    g_assert(qdict_haskey(resp, "return"));
+    resp = qtest_qmp(qts, "{'execute': 'object-add', 'arguments':"
+                     " {'qom-type': 'memory-backend-ram', 'id': 'ram1',"
+                     " 'props': {'size': 1048576 } } }");
+    g_assert_nonnull(resp);
+    qmp_assert_error_class(resp, "GenericError");
+
+    /* delete ram1 object */
+    resp = qtest_qmp(qts, "{'execute': 'object-del', 'arguments':"
+                     " {'id': 'ram1' } }");
+    g_assert_nonnull(resp);
+    g_assert(qdict_haskey(resp, "return"));
+
+    /* attempt to create an object with a property of a wrong type */
+    resp = qtest_qmp(qts, "{'execute': 'object-add', 'arguments':"
+                     " {'qom-type': 'memory-backend-ram', 'id': 'ram1',"
+                     " 'props': {'size': '1048576' } } }");
+    g_assert_nonnull(resp);
+    /* now do it right */
+    qmp_assert_error_class(resp, "GenericError");
+    resp = qtest_qmp(qts, "{'execute': 'object-add', 'arguments':"
+                     " {'qom-type': 'memory-backend-ram', 'id': 'ram1',"
+                     " 'props': {'size': 1048576 } } }");
+    g_assert_nonnull(resp);
+    g_assert(qdict_haskey(resp, "return"));
+
+    /* delete ram1 object */
+    resp = qtest_qmp(qts, "{'execute': 'object-del', 'arguments':"
+                     " {'id': 'ram1' } }");
+    g_assert_nonnull(resp);
+    g_assert(qdict_haskey(resp, "return"));
+
+    /* attempt to create an object without the id */
+    resp = qtest_qmp(qts, "{'execute': 'object-add', 'arguments':"
+                     " {'qom-type': 'memory-backend-ram',"
+                     " 'props': {'size': 1048576 } } }");
+    g_assert_nonnull(resp);
+    qmp_assert_error_class(resp, "GenericError");
+    /* now do it right */
+    resp = qtest_qmp(qts, "{'execute': 'object-add', 'arguments':"
+                     " {'qom-type': 'memory-backend-ram', 'id': 'ram1',"
+                     " 'props': {'size': 1048576 } } }");
+    g_assert_nonnull(resp);
+    g_assert(qdict_haskey(resp, "return"));
+
+    /* delete ram1 object */
+    resp = qtest_qmp(qts, "{'execute': 'object-del', 'arguments':"
+                     " {'id': 'ram1' } }");
+    g_assert_nonnull(resp);
+    g_assert(qdict_haskey(resp, "return"));
+
+    /* attempt to set a non existing property */
+    resp = qtest_qmp(qts, "{'execute': 'object-add', 'arguments':"
+                     " {'qom-type': 'memory-backend-ram', 'id': 'ram1',"
+                     " 'props': {'sized': 1048576 } } }");
     g_assert_nonnull(resp);
     qmp_assert_error_class(resp, "GenericError");
+    /* now do it right */
+    resp = qtest_qmp(qts, "{'execute': 'object-add', 'arguments':"
+                     " {'qom-type': 'memory-backend-ram', 'id': 'ram1',"
+                     " 'props': {'size': 1048576 } } }");
+    g_assert_nonnull(resp);
+    g_assert(qdict_haskey(resp, "return"));
+
+    /* delete ram1 object without id */
+    resp = qtest_qmp(qts, "{'execute': 'object-del', 'arguments':"
+                     " {'ida': 'ram1' } }");
+    g_assert_nonnull(resp);
+
+    /* delete ram1 object */
+    resp = qtest_qmp(qts, "{'execute': 'object-del', 'arguments':"
+                     " {'id': 'ram1' } }");
+    g_assert_nonnull(resp);
+    g_assert(qdict_haskey(resp, "return"));
+
+    /* delete ram1 object that does not exist anymore*/
+    resp = qtest_qmp(qts, "{'execute': 'object-del', 'arguments':"
+                     " {'id': 'ram1' } }");
+    g_assert_nonnull(resp);
+    qmp_assert_error_class(resp, "GenericError");
+
     qtest_quit(qts);
 }
 
···
     qmp_schema_init(&schema);
     add_query_tests(&schema);
 
-    qtest_add_func("qmp/object-add-without-props",
-                   test_object_add_without_props);
-    /* TODO: add coverage of generic object-add failure modes */
+    qtest_add_func("qmp/object-add-failure-modes",
+                   test_object_add_failure_modes);
 
     ret = g_test_run();
+4
tests/tcg/i386/Makefile.target
···
 SKIP_I386_TESTS=test-i386-ssse3
 X86_64_TESTS:=$(filter test-i386-ssse3, $(ALL_X86_TESTS))
 
+test-i386-sse-exceptions: CFLAGS += -msse4.1 -mfpmath=sse
+run-test-i386-sse-exceptions: QEMU_OPTS += -cpu max
+run-plugin-test-i386-sse-exceptions-%: QEMU_OPTS += -cpu max
+
 test-i386-pcmpistri: CFLAGS += -msse4.2
 run-test-i386-pcmpistri: QEMU_OPTS += -cpu max
 run-plugin-test-i386-pcmpistri-%: QEMU_OPTS += -cpu max
+813
tests/tcg/i386/test-i386-sse-exceptions.c
··· 1 + /* Test SSE exceptions. */ 2 + 3 + #include <float.h> 4 + #include <stdint.h> 5 + #include <stdio.h> 6 + 7 + volatile float f_res; 8 + volatile double d_res; 9 + 10 + volatile float f_snan = __builtin_nansf(""); 11 + volatile float f_half = 0.5f; 12 + volatile float f_third = 1.0f / 3.0f; 13 + volatile float f_nan = __builtin_nanl(""); 14 + volatile float f_inf = __builtin_inff(); 15 + volatile float f_ninf = -__builtin_inff(); 16 + volatile float f_one = 1.0f; 17 + volatile float f_two = 2.0f; 18 + volatile float f_zero = 0.0f; 19 + volatile float f_nzero = -0.0f; 20 + volatile float f_min = FLT_MIN; 21 + volatile float f_true_min = 0x1p-149f; 22 + volatile float f_max = FLT_MAX; 23 + volatile float f_nmax = -FLT_MAX; 24 + 25 + volatile double d_snan = __builtin_nans(""); 26 + volatile double d_half = 0.5; 27 + volatile double d_third = 1.0 / 3.0; 28 + volatile double d_nan = __builtin_nan(""); 29 + volatile double d_inf = __builtin_inf(); 30 + volatile double d_ninf = -__builtin_inf(); 31 + volatile double d_one = 1.0; 32 + volatile double d_two = 2.0; 33 + volatile double d_zero = 0.0; 34 + volatile double d_nzero = -0.0; 35 + volatile double d_min = DBL_MIN; 36 + volatile double d_true_min = 0x1p-1074; 37 + volatile double d_max = DBL_MAX; 38 + volatile double d_nmax = -DBL_MAX; 39 + 40 + volatile int32_t i32_max = INT32_MAX; 41 + 42 + #define IE (1 << 0) 43 + #define ZE (1 << 2) 44 + #define OE (1 << 3) 45 + #define UE (1 << 4) 46 + #define PE (1 << 5) 47 + #define EXC (IE | ZE | OE | UE | PE) 48 + 49 + uint32_t mxcsr_default = 0x1f80; 50 + uint32_t mxcsr_ftz = 0x9f80; 51 + 52 + int main(void) 53 + { 54 + uint32_t mxcsr; 55 + int32_t i32_res; 56 + int ret = 0; 57 + 58 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 59 + d_res = f_snan; 60 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 61 + if ((mxcsr & EXC) != IE) { 62 + printf("FAIL: widen float snan\n"); 63 + ret = 1; 64 + } 65 + 66 + __asm__ volatile ("ldmxcsr %0" : : "m" 
(mxcsr_default)); 67 + f_res = d_min; 68 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 69 + if ((mxcsr & EXC) != (UE | PE)) { 70 + printf("FAIL: narrow float underflow\n"); 71 + ret = 1; 72 + } 73 + 74 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 75 + f_res = d_max; 76 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 77 + if ((mxcsr & EXC) != (OE | PE)) { 78 + printf("FAIL: narrow float overflow\n"); 79 + ret = 1; 80 + } 81 + 82 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 83 + f_res = d_third; 84 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 85 + if ((mxcsr & EXC) != PE) { 86 + printf("FAIL: narrow float inexact\n"); 87 + ret = 1; 88 + } 89 + 90 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 91 + f_res = d_snan; 92 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 93 + if ((mxcsr & EXC) != IE) { 94 + printf("FAIL: narrow float snan\n"); 95 + ret = 1; 96 + } 97 + 98 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 99 + __asm__ volatile ("roundss $4, %0, %0" : "=x" (f_res) : "0" (f_min)); 100 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 101 + if ((mxcsr & EXC) != PE) { 102 + printf("FAIL: roundss min\n"); 103 + ret = 1; 104 + } 105 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 106 + __asm__ volatile ("roundss $12, %0, %0" : "=x" (f_res) : "0" (f_min)); 107 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 108 + if ((mxcsr & EXC) != 0) { 109 + printf("FAIL: roundss no-inexact min\n"); 110 + ret = 1; 111 + } 112 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 113 + __asm__ volatile ("roundss $4, %0, %0" : "=x" (f_res) : "0" (f_snan)); 114 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 115 + if ((mxcsr & EXC) != IE) { 116 + printf("FAIL: roundss snan\n"); 117 + ret = 1; 118 + } 119 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 120 + __asm__ volatile ("roundss $12, %0, %0" : "=x" (f_res) : "0" (f_snan)); 121 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 122 + if 
((mxcsr & EXC) != IE) { 123 + printf("FAIL: roundss no-inexact snan\n"); 124 + ret = 1; 125 + } 126 + 127 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 128 + __asm__ volatile ("roundsd $4, %0, %0" : "=x" (d_res) : "0" (d_min)); 129 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 130 + if ((mxcsr & EXC) != PE) { 131 + printf("FAIL: roundsd min\n"); 132 + ret = 1; 133 + } 134 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 135 + __asm__ volatile ("roundsd $12, %0, %0" : "=x" (d_res) : "0" (d_min)); 136 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 137 + if ((mxcsr & EXC) != 0) { 138 + printf("FAIL: roundsd no-inexact min\n"); 139 + ret = 1; 140 + } 141 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 142 + __asm__ volatile ("roundsd $4, %0, %0" : "=x" (d_res) : "0" (d_snan)); 143 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 144 + if ((mxcsr & EXC) != IE) { 145 + printf("FAIL: roundsd snan\n"); 146 + ret = 1; 147 + } 148 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 149 + __asm__ volatile ("roundsd $12, %0, %0" : "=x" (d_res) : "0" (d_snan)); 150 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 151 + if ((mxcsr & EXC) != IE) { 152 + printf("FAIL: roundsd no-inexact snan\n"); 153 + ret = 1; 154 + } 155 + 156 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 157 + __asm__ volatile ("comiss %1, %0" : : "x" (f_nan), "x" (f_zero)); 158 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 159 + if ((mxcsr & EXC) != IE) { 160 + printf("FAIL: comiss nan\n"); 161 + ret = 1; 162 + } 163 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 164 + __asm__ volatile ("ucomiss %1, %0" : : "x" (f_nan), "x" (f_zero)); 165 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 166 + if ((mxcsr & EXC) != 0) { 167 + printf("FAIL: ucomiss nan\n"); 168 + ret = 1; 169 + } 170 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 171 + __asm__ volatile ("ucomiss %1, %0" : : "x" (f_snan), "x" (f_zero)); 172 + __asm__ 
volatile ("stmxcsr %0" : "=m" (mxcsr)); 173 + if ((mxcsr & EXC) != IE) { 174 + printf("FAIL: ucomiss snan\n"); 175 + ret = 1; 176 + } 177 + 178 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 179 + __asm__ volatile ("comisd %1, %0" : : "x" (d_nan), "x" (d_zero)); 180 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 181 + if ((mxcsr & EXC) != IE) { 182 + printf("FAIL: comisd nan\n"); 183 + ret = 1; 184 + } 185 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 186 + __asm__ volatile ("ucomisd %1, %0" : : "x" (d_nan), "x" (d_zero)); 187 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 188 + if ((mxcsr & EXC) != 0) { 189 + printf("FAIL: ucomisd nan\n"); 190 + ret = 1; 191 + } 192 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 193 + __asm__ volatile ("ucomisd %1, %0" : : "x" (d_snan), "x" (d_zero)); 194 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 195 + if ((mxcsr & EXC) != IE) { 196 + printf("FAIL: ucomisd snan\n"); 197 + ret = 1; 198 + } 199 + 200 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 201 + f_res = f_max + f_max; 202 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 203 + if ((mxcsr & EXC) != (OE | PE)) { 204 + printf("FAIL: float add overflow\n"); 205 + ret = 1; 206 + } 207 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 208 + f_res = f_max + f_min; 209 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 210 + if ((mxcsr & EXC) != PE) { 211 + printf("FAIL: float add inexact\n"); 212 + ret = 1; 213 + } 214 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 215 + f_res = f_inf + f_ninf; 216 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 217 + if ((mxcsr & EXC) != IE) { 218 + printf("FAIL: float add inf -inf\n"); 219 + ret = 1; 220 + } 221 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 222 + f_res = f_snan + f_third; 223 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 224 + if ((mxcsr & EXC) != IE) { 225 + printf("FAIL: float add snan\n"); 226 + ret = 1; 227 + } 228 + __asm__ 
volatile ("ldmxcsr %0" : : "m" (mxcsr_ftz)); 229 + f_res = f_true_min + f_true_min; 230 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 231 + if ((mxcsr & EXC) != (UE | PE)) { 232 + printf("FAIL: float add FTZ underflow\n"); 233 + ret = 1; 234 + } 235 + 236 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 237 + d_res = d_max + d_max; 238 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 239 + if ((mxcsr & EXC) != (OE | PE)) { 240 + printf("FAIL: double add overflow\n"); 241 + ret = 1; 242 + } 243 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 244 + d_res = d_max + d_min; 245 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 246 + if ((mxcsr & EXC) != PE) { 247 + printf("FAIL: double add inexact\n"); 248 + ret = 1; 249 + } 250 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 251 + d_res = d_inf + d_ninf; 252 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 253 + if ((mxcsr & EXC) != IE) { 254 + printf("FAIL: double add inf -inf\n"); 255 + ret = 1; 256 + } 257 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 258 + d_res = d_snan + d_third; 259 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 260 + if ((mxcsr & EXC) != IE) { 261 + printf("FAIL: double add snan\n"); 262 + ret = 1; 263 + } 264 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_ftz)); 265 + d_res = d_true_min + d_true_min; 266 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 267 + if ((mxcsr & EXC) != (UE | PE)) { 268 + printf("FAIL: double add FTZ underflow\n"); 269 + ret = 1; 270 + } 271 + 272 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 273 + f_res = f_max - f_nmax; 274 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 275 + if ((mxcsr & EXC) != (OE | PE)) { 276 + printf("FAIL: float sub overflow\n"); 277 + ret = 1; 278 + } 279 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 280 + f_res = f_max - f_min; 281 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 282 + if ((mxcsr & EXC) != PE) { 283 + printf("FAIL: float sub inexact\n"); 
284 + ret = 1; 285 + } 286 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 287 + f_res = f_inf - f_inf; 288 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 289 + if ((mxcsr & EXC) != IE) { 290 + printf("FAIL: float sub inf inf\n"); 291 + ret = 1; 292 + } 293 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 294 + f_res = f_snan - f_third; 295 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 296 + if ((mxcsr & EXC) != IE) { 297 + printf("FAIL: float sub snan\n"); 298 + ret = 1; 299 + } 300 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_ftz)); 301 + f_res = f_min - f_true_min; 302 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 303 + if ((mxcsr & EXC) != (UE | PE)) { 304 + printf("FAIL: float sub FTZ underflow\n"); 305 + ret = 1; 306 + } 307 + 308 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 309 + d_res = d_max - d_nmax; 310 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 311 + if ((mxcsr & EXC) != (OE | PE)) { 312 + printf("FAIL: double sub overflow\n"); 313 + ret = 1; 314 + } 315 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 316 + d_res = d_max - d_min; 317 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 318 + if ((mxcsr & EXC) != PE) { 319 + printf("FAIL: double sub inexact\n"); 320 + ret = 1; 321 + } 322 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 323 + d_res = d_inf - d_inf; 324 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 325 + if ((mxcsr & EXC) != IE) { 326 + printf("FAIL: double sub inf inf\n"); 327 + ret = 1; 328 + } 329 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 330 + d_res = d_snan - d_third; 331 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 332 + if ((mxcsr & EXC) != IE) { 333 + printf("FAIL: double sub snan\n"); 334 + ret = 1; 335 + } 336 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_ftz)); 337 + d_res = d_min - d_true_min; 338 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 339 + if ((mxcsr & EXC) != (UE | PE)) { 340 + printf("FAIL: double sub FTZ 
underflow\n"); 341 + ret = 1; 342 + } 343 + 344 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 345 + f_res = f_max * f_max; 346 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 347 + if ((mxcsr & EXC) != (OE | PE)) { 348 + printf("FAIL: float mul overflow\n"); 349 + ret = 1; 350 + } 351 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 352 + f_res = f_third * f_third; 353 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 354 + if ((mxcsr & EXC) != PE) { 355 + printf("FAIL: float mul inexact\n"); 356 + ret = 1; 357 + } 358 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 359 + f_res = f_min * f_min; 360 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 361 + if ((mxcsr & EXC) != (UE | PE)) { 362 + printf("FAIL: float mul underflow\n"); 363 + ret = 1; 364 + } 365 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 366 + f_res = f_inf * f_zero; 367 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 368 + if ((mxcsr & EXC) != IE) { 369 + printf("FAIL: float mul inf 0\n"); 370 + ret = 1; 371 + } 372 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 373 + f_res = f_snan * f_third; 374 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 375 + if ((mxcsr & EXC) != IE) { 376 + printf("FAIL: float mul snan\n"); 377 + ret = 1; 378 + } 379 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_ftz)); 380 + f_res = f_min * f_half; 381 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 382 + if ((mxcsr & EXC) != (UE | PE)) { 383 + printf("FAIL: float mul FTZ underflow\n"); 384 + ret = 1; 385 + } 386 + 387 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 388 + d_res = d_max * d_max; 389 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 390 + if ((mxcsr & EXC) != (OE | PE)) { 391 + printf("FAIL: double mul overflow\n"); 392 + ret = 1; 393 + } 394 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 395 + d_res = d_third * d_third; 396 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 397 + if ((mxcsr & EXC) != PE) { 398 + 
printf("FAIL: double mul inexact\n"); 399 + ret = 1; 400 + } 401 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 402 + d_res = d_min * d_min; 403 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 404 + if ((mxcsr & EXC) != (UE | PE)) { 405 + printf("FAIL: double mul underflow\n"); 406 + ret = 1; 407 + } 408 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 409 + d_res = d_inf * d_zero; 410 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 411 + if ((mxcsr & EXC) != IE) { 412 + printf("FAIL: double mul inf 0\n"); 413 + ret = 1; 414 + } 415 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 416 + d_res = d_snan * d_third; 417 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 418 + if ((mxcsr & EXC) != IE) { 419 + printf("FAIL: double mul snan\n"); 420 + ret = 1; 421 + } 422 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_ftz)); 423 + d_res = d_min * d_half; 424 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 425 + if ((mxcsr & EXC) != (UE | PE)) { 426 + printf("FAIL: double mul FTZ underflow\n"); 427 + ret = 1; 428 + } 429 + 430 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 431 + f_res = f_max / f_min; 432 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 433 + if ((mxcsr & EXC) != (OE | PE)) { 434 + printf("FAIL: float div overflow\n"); 435 + ret = 1; 436 + } 437 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 438 + f_res = f_one / f_third; 439 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 440 + if ((mxcsr & EXC) != PE) { 441 + printf("FAIL: float div inexact\n"); 442 + ret = 1; 443 + } 444 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 445 + f_res = f_min / f_max; 446 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 447 + if ((mxcsr & EXC) != (UE | PE)) { 448 + printf("FAIL: float div underflow\n"); 449 + ret = 1; 450 + } 451 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 452 + f_res = f_one / f_zero; 453 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 454 + if ((mxcsr & EXC) != 
ZE) { 455 + printf("FAIL: float div 1 0\n"); 456 + ret = 1; 457 + } 458 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 459 + f_res = f_inf / f_zero; 460 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 461 + if ((mxcsr & EXC) != 0) { 462 + printf("FAIL: float div inf 0\n"); 463 + ret = 1; 464 + } 465 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 466 + f_res = f_nan / f_zero; 467 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 468 + if ((mxcsr & EXC) != 0) { 469 + printf("FAIL: float div nan 0\n"); 470 + ret = 1; 471 + } 472 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 473 + f_res = f_zero / f_zero; 474 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 475 + if ((mxcsr & EXC) != IE) { 476 + printf("FAIL: float div 0 0\n"); 477 + ret = 1; 478 + } 479 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 480 + f_res = f_inf / f_inf; 481 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 482 + if ((mxcsr & EXC) != IE) { 483 + printf("FAIL: float div inf inf\n"); 484 + ret = 1; 485 + } 486 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 487 + f_res = f_snan / f_third; 488 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 489 + if ((mxcsr & EXC) != IE) { 490 + printf("FAIL: float div snan\n"); 491 + ret = 1; 492 + } 493 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_ftz)); 494 + f_res = f_min / f_two; 495 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 496 + if ((mxcsr & EXC) != (UE | PE)) { 497 + printf("FAIL: float div FTZ underflow\n"); 498 + ret = 1; 499 + } 500 + 501 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 502 + d_res = d_max / d_min; 503 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 504 + if ((mxcsr & EXC) != (OE | PE)) { 505 + printf("FAIL: double div overflow\n"); 506 + ret = 1; 507 + } 508 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 509 + d_res = d_one / d_third; 510 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 511 + if ((mxcsr & EXC) != PE) { 512 + 
printf("FAIL: double div inexact\n"); 513 + ret = 1; 514 + } 515 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 516 + d_res = d_min / d_max; 517 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 518 + if ((mxcsr & EXC) != (UE | PE)) { 519 + printf("FAIL: double div underflow\n"); 520 + ret = 1; 521 + } 522 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 523 + d_res = d_one / d_zero; 524 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 525 + if ((mxcsr & EXC) != ZE) { 526 + printf("FAIL: double div 1 0\n"); 527 + ret = 1; 528 + } 529 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 530 + d_res = d_inf / d_zero; 531 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 532 + if ((mxcsr & EXC) != 0) { 533 + printf("FAIL: double div inf 0\n"); 534 + ret = 1; 535 + } 536 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 537 + d_res = d_nan / d_zero; 538 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 539 + if ((mxcsr & EXC) != 0) { 540 + printf("FAIL: double div nan 0\n"); 541 + ret = 1; 542 + } 543 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 544 + d_res = d_zero / d_zero; 545 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 546 + if ((mxcsr & EXC) != IE) { 547 + printf("FAIL: double div 0 0\n"); 548 + ret = 1; 549 + } 550 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 551 + d_res = d_inf / d_inf; 552 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 553 + if ((mxcsr & EXC) != IE) { 554 + printf("FAIL: double div inf inf\n"); 555 + ret = 1; 556 + } 557 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 558 + d_res = d_snan / d_third; 559 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 560 + if ((mxcsr & EXC) != IE) { 561 + printf("FAIL: double div snan\n"); 562 + ret = 1; 563 + } 564 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_ftz)); 565 + d_res = d_min / d_two; 566 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 567 + if ((mxcsr & EXC) != (UE | PE)) { 568 + printf("FAIL: double div FTZ 
underflow\n"); 569 + ret = 1; 570 + } 571 + 572 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 573 + __asm__ volatile ("sqrtss %0, %0" : "=x" (f_res) : "0" (f_max)); 574 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 575 + if ((mxcsr & EXC) != PE) { 576 + printf("FAIL: sqrtss inexact\n"); 577 + ret = 1; 578 + } 579 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 580 + __asm__ volatile ("sqrtss %0, %0" : "=x" (f_res) : "0" (f_nmax)); 581 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 582 + if ((mxcsr & EXC) != IE) { 583 + printf("FAIL: sqrtss -max\n"); 584 + ret = 1; 585 + } 586 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 587 + __asm__ volatile ("sqrtss %0, %0" : "=x" (f_res) : "0" (f_ninf)); 588 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 589 + if ((mxcsr & EXC) != IE) { 590 + printf("FAIL: sqrtss -inf\n"); 591 + ret = 1; 592 + } 593 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 594 + __asm__ volatile ("sqrtss %0, %0" : "=x" (f_res) : "0" (f_snan)); 595 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 596 + if ((mxcsr & EXC) != IE) { 597 + printf("FAIL: sqrtss snan\n"); 598 + ret = 1; 599 + } 600 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 601 + __asm__ volatile ("sqrtss %0, %0" : "=x" (f_res) : "0" (f_nzero)); 602 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 603 + if ((mxcsr & EXC) != 0) { 604 + printf("FAIL: sqrtss -0\n"); 605 + ret = 1; 606 + } 607 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 608 + __asm__ volatile ("sqrtss %0, %0" : "=x" (f_res) : 609 + "0" (-__builtin_nanf(""))); 610 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 611 + if ((mxcsr & EXC) != 0) { 612 + printf("FAIL: sqrtss -nan\n"); 613 + ret = 1; 614 + } 615 + 616 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 617 + __asm__ volatile ("sqrtsd %0, %0" : "=x" (d_res) : "0" (d_max)); 618 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 619 + if ((mxcsr & EXC) != PE) { 620 + 
printf("FAIL: sqrtsd inexact\n"); 621 + ret = 1; 622 + } 623 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 624 + __asm__ volatile ("sqrtsd %0, %0" : "=x" (d_res) : "0" (d_nmax)); 625 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 626 + if ((mxcsr & EXC) != IE) { 627 + printf("FAIL: sqrtsd -max\n"); 628 + ret = 1; 629 + } 630 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 631 + __asm__ volatile ("sqrtsd %0, %0" : "=x" (d_res) : "0" (d_ninf)); 632 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 633 + if ((mxcsr & EXC) != IE) { 634 + printf("FAIL: sqrtsd -inf\n"); 635 + ret = 1; 636 + } 637 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 638 + __asm__ volatile ("sqrtsd %0, %0" : "=x" (d_res) : "0" (d_snan)); 639 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 640 + if ((mxcsr & EXC) != IE) { 641 + printf("FAIL: sqrtsd snan\n"); 642 + ret = 1; 643 + } 644 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 645 + __asm__ volatile ("sqrtsd %0, %0" : "=x" (d_res) : "0" (d_nzero)); 646 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 647 + if ((mxcsr & EXC) != 0) { 648 + printf("FAIL: sqrtsd -0\n"); 649 + ret = 1; 650 + } 651 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 652 + __asm__ volatile ("sqrtsd %0, %0" : "=x" (d_res) : 653 + "0" (-__builtin_nan(""))); 654 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 655 + if ((mxcsr & EXC) != 0) { 656 + printf("FAIL: sqrtsd -nan\n"); 657 + ret = 1; 658 + } 659 + 660 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 661 + __asm__ volatile ("maxss %1, %0" : : "x" (f_nan), "x" (f_zero)); 662 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 663 + if ((mxcsr & EXC) != IE) { 664 + printf("FAIL: maxss nan\n"); 665 + ret = 1; 666 + } 667 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 668 + __asm__ volatile ("minss %1, %0" : : "x" (f_nan), "x" (f_zero)); 669 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 670 + if ((mxcsr & EXC) != IE) { 671 + 
printf("FAIL: minss nan\n"); 672 + ret = 1; 673 + } 674 + 675 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 676 + __asm__ volatile ("maxsd %1, %0" : : "x" (d_nan), "x" (d_zero)); 677 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 678 + if ((mxcsr & EXC) != IE) { 679 + printf("FAIL: maxsd nan\n"); 680 + ret = 1; 681 + } 682 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 683 + __asm__ volatile ("minsd %1, %0" : : "x" (d_nan), "x" (d_zero)); 684 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 685 + if ((mxcsr & EXC) != IE) { 686 + printf("FAIL: minsd nan\n"); 687 + ret = 1; 688 + } 689 + 690 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 691 + __asm__ volatile ("cvtsi2ss %1, %0" : "=x" (f_res) : "m" (i32_max)); 692 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 693 + if ((mxcsr & EXC) != PE) { 694 + printf("FAIL: cvtsi2ss inexact\n"); 695 + ret = 1; 696 + } 697 + 698 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 699 + __asm__ volatile ("cvtsi2sd %1, %0" : "=x" (d_res) : "m" (i32_max)); 700 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 701 + if ((mxcsr & EXC) != 0) { 702 + printf("FAIL: cvtsi2sd exact\n"); 703 + ret = 1; 704 + } 705 + 706 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 707 + __asm__ volatile ("cvtss2si %1, %0" : "=r" (i32_res) : "x" (1.5f)); 708 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 709 + if ((mxcsr & EXC) != PE) { 710 + printf("FAIL: cvtss2si inexact\n"); 711 + ret = 1; 712 + } 713 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 714 + __asm__ volatile ("cvtss2si %1, %0" : "=r" (i32_res) : "x" (0x1p31f)); 715 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 716 + if ((mxcsr & EXC) != IE) { 717 + printf("FAIL: cvtss2si 0x1p31\n"); 718 + ret = 1; 719 + } 720 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 721 + __asm__ volatile ("cvtss2si %1, %0" : "=r" (i32_res) : "x" (f_inf)); 722 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 723 + if ((mxcsr 
& EXC) != IE) { 724 + printf("FAIL: cvtss2si inf\n"); 725 + ret = 1; 726 + } 727 + 728 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 729 + __asm__ volatile ("cvtsd2si %1, %0" : "=r" (i32_res) : "x" (1.5)); 730 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 731 + if ((mxcsr & EXC) != PE) { 732 + printf("FAIL: cvtsd2si inexact\n"); 733 + ret = 1; 734 + } 735 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 736 + __asm__ volatile ("cvtsd2si %1, %0" : "=r" (i32_res) : "x" (0x1p31)); 737 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 738 + if ((mxcsr & EXC) != IE) { 739 + printf("FAIL: cvtsd2si 0x1p31\n"); 740 + ret = 1; 741 + } 742 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 743 + __asm__ volatile ("cvtsd2si %1, %0" : "=r" (i32_res) : "x" (d_inf)); 744 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 745 + if ((mxcsr & EXC) != IE) { 746 + printf("FAIL: cvtsd2si inf\n"); 747 + ret = 1; 748 + } 749 + 750 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 751 + __asm__ volatile ("cvttss2si %1, %0" : "=r" (i32_res) : "x" (1.5f)); 752 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 753 + if ((mxcsr & EXC) != PE) { 754 + printf("FAIL: cvttss2si inexact\n"); 755 + ret = 1; 756 + } 757 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 758 + __asm__ volatile ("cvttss2si %1, %0" : "=r" (i32_res) : "x" (0x1p31f)); 759 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 760 + if ((mxcsr & EXC) != IE) { 761 + printf("FAIL: cvttss2si 0x1p31\n"); 762 + ret = 1; 763 + } 764 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 765 + __asm__ volatile ("cvttss2si %1, %0" : "=r" (i32_res) : "x" (f_inf)); 766 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 767 + if ((mxcsr & EXC) != IE) { 768 + printf("FAIL: cvttss2si inf\n"); 769 + ret = 1; 770 + } 771 + 772 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 773 + __asm__ volatile ("cvttsd2si %1, %0" : "=r" (i32_res) : "x" (1.5)); 774 + __asm__ volatile 
("stmxcsr %0" : "=m" (mxcsr)); 775 + if ((mxcsr & EXC) != PE) { 776 + printf("FAIL: cvttsd2si inexact\n"); 777 + ret = 1; 778 + } 779 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 780 + __asm__ volatile ("cvttsd2si %1, %0" : "=r" (i32_res) : "x" (0x1p31)); 781 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 782 + if ((mxcsr & EXC) != IE) { 783 + printf("FAIL: cvttsd2si 0x1p31\n"); 784 + ret = 1; 785 + } 786 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 787 + __asm__ volatile ("cvttsd2si %1, %0" : "=r" (i32_res) : "x" (d_inf)); 788 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 789 + if ((mxcsr & EXC) != IE) { 790 + printf("FAIL: cvttsd2si inf\n"); 791 + ret = 1; 792 + } 793 + 794 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 795 + __asm__ volatile ("rcpss %0, %0" : "=x" (f_res) : "0" (f_snan)); 796 + f_res += f_one; 797 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 798 + if ((mxcsr & EXC) != 0) { 799 + printf("FAIL: rcpss snan\n"); 800 + ret = 1; 801 + } 802 + 803 + __asm__ volatile ("ldmxcsr %0" : : "m" (mxcsr_default)); 804 + __asm__ volatile ("rsqrtss %0, %0" : "=x" (f_res) : "0" (f_snan)); 805 + f_res += f_one; 806 + __asm__ volatile ("stmxcsr %0" : "=m" (mxcsr)); 807 + if ((mxcsr & EXC) != 0) { 808 + printf("FAIL: rsqrtss snan\n"); 809 + ret = 1; 810 + } 811 + 812 + return ret; 813 + }
ui/cocoa.m (+1)
 #include "ui/input.h"
 #include "sysemu/sysemu.h"
 #include "sysemu/runstate.h"
+#include "sysemu/cpu-throttle.h"
 #include "qapi/error.h"
 #include "qapi/qapi-commands-block.h"
 #include "qapi/qapi-commands-misc.h"
util/qemu-error.c (+7)
 /* Prepend timestamp to messages */
 bool error_with_timestamp;
+bool error_with_guestname;
+const char *error_guest_name;

 int error_printf(const char *fmt, ...)
 {
···
         timestr = g_time_val_to_iso8601(&tv);
         error_printf("%s ", timestr);
         g_free(timestr);
+    }
+
+    /* Only prepend guest name if -msg guest-name and -name guest=... are set */
+    if (error_with_guestname && error_guest_name && !cur_mon) {
+        error_printf("%s ", error_guest_name);
     }

     print_loc();