
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20200430-1' into staging

target-arm queue:
* xlnx-zdma: Fix endianness handling of descriptor loading
* nrf51: Fix last GPIO CNF address
* gicv3: Use gicr_typer in arm_gicv3_icc_reset
* msf2: Add EMAC block to SmartFusion2 SoC
* New clock modelling framework
* hw/arm: versal: Setup the ADMA with 128bit bus-width
* Cadence: gem: fix wraparound in 64bit descriptors
* cadence_gem: clear RX control descriptor
* target/arm: Vectorize integer comparison vs zero
* hw/arm/virt: dt: add kaslr-seed property
* hw/arm: xlnx-zcu102: Disable unsupported FDT firmware nodes

# gpg: Signature made Thu 30 Apr 2020 15:43:54 BST
# gpg: using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg: issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@gmail.com>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [ultimate]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE

* remotes/pmaydell/tags/pull-target-arm-20200430-1: (30 commits)
hw/arm: xlnx-zcu102: Disable unsupported FDT firmware nodes
hw/arm: xlnx-zcu102: Move arm_boot_info into XlnxZCU102
device_tree: Constify compat in qemu_fdt_node_path()
device_tree: Allow name wildcards in qemu_fdt_node_path()
target/arm/cpu: Update coding style to make checkpatch.pl happy
target/arm: Make cpu_register() available for other files
target/arm: Restrict the Address Translate write operation to TCG accel
hw/arm/virt: dt: add kaslr-seed property
hw/arm/virt: dt: move creation of /secure-chosen to create_fdt()
target/arm: Vectorize integer comparison vs zero
net: cadence_gem: clear RX control descriptor
Cadence: gem: fix wraparound in 64bit descriptors
hw/arm: versal: Setup the ADMA with 128bit bus-width
qdev-monitor: print the device's clock with info qtree
hw/arm/xilinx_zynq: connect uart clocks to slcr
hw/char/cadence_uart: add clock support
hw/misc/zynq_slcr: add clock generation for uarts
docs/clocks: add device's clock documentation
qdev-clock: introduce an init array to ease the device construction
qdev: add clock input&output support to devices.
...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

+2533 -193
+2
MAINTAINERS
···
 F: include/hw/misc/msf2-sysreg.h
 F: include/hw/timer/mss-timer.h
 F: include/hw/ssi/mss-spi.h
+F: hw/net/msf2-emac.c
+F: include/hw/net/msf2-emac.h
 
 Emcraft M2S-FG484
 M: Subbaraya Sundeep <sundeep.lkml@gmail.com>
+2 -2
device_tree.c
···
         return path_array;
     }
 
-char **qemu_fdt_node_path(void *fdt, const char *name, char *compat,
+char **qemu_fdt_node_path(void *fdt, const char *name, const char *compat,
                           Error **errp)
 {
     int offset, len, ret;
···
             offset = len;
             break;
         }
-        if (!strcmp(iter_name, name)) {
+        if (!name || !strcmp(iter_name, name)) {
             char *path;
 
             path = g_malloc(path_len);
+391
docs/devel/clocks.rst
···
+Modelling a clock tree in QEMU
+==============================
+
+What are clocks?
+----------------
+
+Clocks are QOM objects developed for the purpose of modelling the
+distribution of clocks in QEMU.
+
+They allow us to model the clock distribution of a platform and detect
+configuration errors in the clock tree such as badly configured PLL, clock
+source selection or disabled clock.
+
+The object is *Clock* and its QOM name is ``clock`` (in C code, the macro
+``TYPE_CLOCK``).
+
+Clocks are typically used with devices where they are used to model inputs
+and outputs. They are created in a similar way to GPIOs. Inputs and outputs
+of different devices can be connected together.
+
+In these cases a Clock object is a child of a Device object, but this
+is not a requirement. Clocks can be independent of devices. For
+example it is possible to create a clock outside of any device to
+model the main clock source of a machine.
+
+Here is an example of clocks::
+
+    +---------+      +----------------------+   +--------------+
+    | Clock 1 |      |       Device B       |   |   Device C   |
+    |         |      | +-------+  +-------+ |   |  +-------+   |
+    |         |>>-+-->>|Clock 2|  |Clock 3|>>--->>|Clock 6|   |
+    +---------+   |  | | (in)  |  | (out) | |   |  | (in)  |   |
+                  |  | +-------+  +-------+ |   |  +-------+   |
+                  |  |            +-------+ |   +--------------+
+                  |  |            |Clock 4|>>
+                  |  |            | (out) | |   +--------------+
+                  |  |            +-------+ |   |   Device D   |
+                  |  |            +-------+ |   |  +-------+   |
+                  |  |            |Clock 5|>>--->>|Clock 7|   |
+                  |  |            | (out) | |   |  | (in)  |   |
+                  |  |            +-------+ |   |  +-------+   |
+                  |  +----------------------+   |              |
+                  |                             |  +-------+   |
+                  +----------------------------->>|Clock 8|   |
+                                                |  | (in)  |   |
+                                                |  +-------+   |
+                                                +--------------+
+
+Clocks are defined in the ``include/hw/clock.h`` header and device
+related functions are defined in the ``include/hw/qdev-clock.h``
+header.
+
+The clock state
+---------------
+
+The state of a clock is its period; it is stored as an integer in units
+of 2 :sup:`-32` ns. The special value of 0 is used to represent the
+clock being inactive or gated. The clocks do not model the signal
+itself (pin toggling) or other properties such as the duty cycle.
+
+All clocks contain this state: outputs as well as inputs. This allows
+the current period of a clock to be fetched at any time. When a clock
+is updated, the value is immediately propagated to all connected
+clocks in the tree.
+
+To ease interaction with clocks, helpers with a unit suffix are defined for
+every clock state setter or getter. The suffixes are:
+
+- ``_ns`` for handling periods in nanoseconds
+- ``_hz`` for handling frequencies in hertz
+
+The 0 period value is converted to 0 in hertz and vice versa. 0 always means
+that the clock is disabled.
+
+Adding a new clock
+------------------
+
+Adding clocks to a device must be done during the init method of the Device
+instance.
+
+To add an input clock to a device, the function ``qdev_init_clock_in()``
+must be used. It takes the name, a callback and an opaque parameter
+for the callback (this will be explained in a following section).
+Output is simpler; only the name is required. Typically::
+
+    qdev_init_clock_in(DEVICE(dev), "clk_in", clk_in_callback, dev);
+    qdev_init_clock_out(DEVICE(dev), "clk_out");
+
+Both functions return the created Clock pointer, which should be saved in the
+device's state structure for further use.
+
+These objects will be automatically deleted by the QOM reference mechanism.
+
+Note that it is possible to create a static array describing clock inputs and
+outputs. The function ``qdev_init_clocks()`` must be called with the array as
+parameter to initialize the clocks: it has the same behaviour as calling the
+``qdev_init_clock_in/out()`` for each clock in the array. To ease the array
+construction, some macros are defined in ``include/hw/qdev-clock.h``.
+As an example, the following creates 2 clocks for a device: one input and one
+output.
+
+.. code-block:: c
+
+    /* device structure containing pointers to the clock objects */
+    typedef struct MyDeviceState {
+        DeviceState parent_obj;
+        Clock *clk_in;
+        Clock *clk_out;
+    } MyDeviceState;
+
+    /*
+     * callback for the input clock (see "Callback on input clock
+     * change" section below for more information).
+     */
+    static void clk_in_callback(void *opaque);
+
+    /*
+     * static array describing clocks:
+     * + a clock input named "clk_in", whose pointer is stored in
+     *   the clk_in field of a MyDeviceState structure with callback
+     *   clk_in_callback.
+     * + a clock output named "clk_out" whose pointer is stored in
+     *   the clk_out field of a MyDeviceState structure.
+     */
+    static const ClockPortInitArray mydev_clocks = {
+        QDEV_CLOCK_IN(MyDeviceState, clk_in, clk_in_callback),
+        QDEV_CLOCK_OUT(MyDeviceState, clk_out),
+        QDEV_CLOCK_END
+    };
+
+    /* device initialization function */
+    static void mydev_init(Object *obj)
+    {
+        /* cast to MyDeviceState */
+        MyDeviceState *mydev = MYDEVICE(obj);
+        /* create and fill the pointer fields in the MyDeviceState */
+        qdev_init_clocks(mydev, mydev_clocks);
+        [...]
+    }
+
+An alternative way to create a clock is to simply call
+``object_new(TYPE_CLOCK)``. In that case the clock will neither be an
+input nor an output of a device. After the whole QOM hierarchy of the
+clock has been set, ``clock_setup_canonical_path()`` should be called.
+
+At creation, the period of the clock is 0: the clock is disabled. You can
+change it using ``clock_set_ns()`` or ``clock_set_hz()``.
+
+Note that if you are creating a clock with a fixed period which will never
+change (for example the main clock source of a board), then you'll have
+nothing else to do. This value will be propagated to other clocks when
+connecting the clocks together and devices will fetch the right value during
+the first reset.
+
+Retrieving clocks from a device
+-------------------------------
+
+``qdev_get_clock_in()`` and ``qdev_get_clock_out()`` are available to
+get the clock inputs or outputs of a device. For example:
+
+.. code-block:: c
+
+    Clock *clk = qdev_get_clock_in(DEVICE(mydev), "clk_in");
+
+or:
+
+.. code-block:: c
+
+    Clock *clk = qdev_get_clock_out(DEVICE(mydev), "clk_out");
+
+Connecting two clocks together
+------------------------------
+
+To connect two clocks together, use the ``clock_set_source()`` function.
+Given two clocks ``clk1`` and ``clk2``, ``clock_set_source(clk2, clk1);``
+configures ``clk2`` to follow the ``clk1`` period changes. Every time
+``clk1`` is updated, ``clk2`` will be updated too.
+
+When connecting clocks between devices, prefer using the
+``qdev_connect_clock_in()`` function to set the source of an input
+device clock. For example, to connect the input clock ``clk2`` of
+``devB`` to the output clock ``clk1`` of ``devA``, do:
+
+.. code-block:: c
+
+    qdev_connect_clock_in(devB, "clk2", qdev_get_clock_out(devA, "clk1"))
+
+We used ``qdev_get_clock_out()`` above, but any clock can drive an
+input clock, even another input clock. The following diagram shows
+some examples of connections. Note also that a clock can drive several
+other clocks.
+
+::
+
+    +------------+  +--------------------------------------------------+
+    |  Device A  |  |                     Device B                     |
+    |            |  |            +---------------------+               |
+    |            |  |            |       Device C      |               |
+    | +-------+  |  | +-------+  | +-------+ +-------+ |   +-------+   |
+    | |Clock 1|>>-->>|Clock 2|>>+-->>|Clock 3| |Clock 5|>>>>|Clock 6|>>
+    | | (out) |  |  | | (in) | |  | | (in)  | | (out) | |   | (out) |  |
+    | +-------+  |  | +-------+ |  | +-------+ +-------+ |  +-------+  |
+    +------------+  |           |  +---------------------+             |
+                    |           |                                      |
+                    |           |  +--------------+                    |
+                    |           |  |   Device D   |                    |
+                    |           |  |  +-------+   |                    |
+                    |           +-->>|Clock 4|    |                    |
+                    |              |  | (in)  |   |                    |
+                    |              |  +-------+   |                    |
+                    |              +--------------+                    |
+                    +--------------------------------------------------+
+
+In the above example, when *Clock 1* is updated by *Device A*, three
+clocks get the new clock period value: *Clock 2*, *Clock 3* and *Clock 4*.
+
+It is not possible to disconnect a clock or to change the clock connection
+after it is connected.
+
+Unconnected input clocks
+------------------------
+
+A newly created input clock is disabled (period of 0). This means the
+clock will be considered as disabled until the period is updated. If
+the clock remains unconnected it will always keep its initial value
+of 0. If this is not the desired behaviour, ``clock_set()``,
+``clock_set_ns()`` or ``clock_set_hz()`` should be called on the Clock
+object during device instance init. For example:
+
+.. code-block:: c
+
+    clk = qdev_init_clock_in(DEVICE(dev), "clk-in", clk_in_callback,
+                             dev);
+    /* set initial value to 10ns / 100MHz */
+    clock_set_ns(clk, 10);
+
+Fetching clock frequency/period
+-------------------------------
+
+To get the current state of a clock, use the functions ``clock_get()``,
+``clock_get_ns()`` or ``clock_get_hz()``.
+
+It is also possible to register a callback on clock frequency changes.
+Here is an example:
+
+.. code-block:: c
+
+    void clock_callback(void *opaque)
+    {
+        MyDeviceState *s = (MyDeviceState *) opaque;
+        /*
+         * 'opaque' is the argument passed to qdev_init_clock_in();
+         * usually this will be the device state pointer.
+         */
+
+        /* do something with the new period */
+        fprintf(stdout, "device new period is %" PRIu64 "ns\n",
+                clock_get_ns(s->my_clk_input));
+    }
+
+Changing a clock period
+-----------------------
+
+A device can change its outputs using the ``clock_update()``,
+``clock_update_ns()`` or ``clock_update_hz()`` function. It will trigger
+updates on every connected input.
+
+For example, let's say that we have an output clock *clkout* and we
+have a pointer to it in the device state because we did the following
+in the init phase:
+
+.. code-block:: c
+
+    dev->clkout = qdev_init_clock_out(DEVICE(dev), "clkout");
+
+Then at any time (apart from the cases listed below), it is possible to
+change the clock value by doing:
+
+.. code-block:: c
+
+    clock_update_hz(dev->clkout, 1000 * 1000 * 1000); /* 1GHz */
+
+Because updating a clock may trigger side effects through connected
+clocks and their callbacks, this operation must be done while holding
+the qemu io lock.
+
+For the same reason, one can update clocks only when it is allowed to have
+side effects on other objects. In consequence, it is forbidden:
+
+* during migration,
+* and in the enter phase of reset.
+
+Note that calling ``clock_update[_ns|_hz]()`` is equivalent to calling
+``clock_set[_ns|_hz]()`` (with the same arguments) then
+``clock_propagate()`` on the clock. Thus, setting the clock value can
+be separated from triggering the side-effects. This is often required
+to factorize code to handle reset and migration in devices.
+
+Aliasing clocks
+---------------
+
+Sometimes, one needs to forward, or inherit, a clock from another
+device. Typically, when doing device composition, a device might
+expose a sub-device's clock without interfering with it. The function
+``qdev_alias_clock()`` can be used to achieve this behaviour. Note
+that it is possible to expose the clock under a different name.
+``qdev_alias_clock()`` works for both input and output clocks.
+
+For example, if device B is a child of device A,
+``device_a_instance_init()`` may do something like this:
+
+.. code-block:: c
+
+    void device_a_instance_init(Object *obj)
+    {
+        AState *A = DEVICE_A(obj);
+        BState *B;
+        /* create object B as child of A */
+        [...]
+        qdev_alias_clock(B, "clk", A, "b_clk");
+        /*
+         * Now A has a clock "b_clk" which is an alias to
+         * the clock "clk" of its child B.
+         */
+    }
+
+This function does not return any clock object. The new clock has the
+same direction (input or output) as the original one. This function
+only adds a link to the existing clock. In the above example, object B
+remains the only object allowed to use the clock and device A must not
+try to change the clock period or set a callback to the clock. This
+diagram describes the example with an input clock::
+
+    +--------------------------+
+    |         Device A         |
+    |       +--------------+   |
+    |       |   Device B   |   |
+    |       |  +-------+   |   |
+    >>"b_clk">>>| "clk" |   |   |
+    |  (in) |  |  (in) |   |   |
+    |       |  +-------+   |   |
+    |       +--------------+   |
+    +--------------------------+
+
+Migration
+---------
+
+Clock state is not migrated automatically. Every device must handle its
+clock migration. Alias clocks must not be migrated.
+
+To ensure clock states are restored correctly during migration, there
+are two solutions.
+
+Clock states can be migrated by adding an entry into the device
+vmstate description. You should use the ``VMSTATE_CLOCK`` macro for this.
+This is typically used to migrate an input clock state. For example:
+
+.. code-block:: c
+
+    MyDeviceState {
+        DeviceState parent_obj;
+        [...] /* some fields */
+        Clock *clk;
+    };
+
+    VMStateDescription my_device_vmstate = {
+        .name = "my_device",
+        .fields = (VMStateField[]) {
+            [...], /* other migrated fields */
+            VMSTATE_CLOCK(clk, MyDeviceState),
+            VMSTATE_END_OF_LIST()
+        }
+    };
+
+The second solution is to restore the clock state using information already
+at our disposal. This can be used to restore output clock states using the
+device state. The functions ``clock_set[_ns|_hz]()`` can be used during the
+``post_load()`` migration callback.
+
+When adding clock support to an existing device, if you care about
+migration compatibility you will need to be careful, as simply adding
+a ``VMSTATE_CLOCK()`` line will break compatibility. Instead, you can
+put the ``VMSTATE_CLOCK()`` line into a vmstate subsection with a
+suitable ``needed`` function, and use ``clock_set()`` in a
+``pre_load()`` function to set the default value that will be used if
+the source virtual machine in the migration does not send the clock
+state.
+
+Care should be taken not to use ``clock_update[_ns|_hz]()`` or
+``clock_propagate()`` during the whole migration procedure because it
+will trigger side effects to other devices in an unknown state.
+1
docs/devel/index.rst
···
    bitops
    reset
    s390-dasd-ipl
+   clocks
+1 -1
hw/acpi/cpu.c
···
         state->devs[i].arch_id = id_list->cpus[i].arch_id;
     }
     memory_region_init_io(&state->ctrl_reg, owner, &cpu_hotplug_ops, state,
-                          "acpi-mem-hotplug", ACPI_CPU_HOTPLUG_REG_LEN);
+                          "acpi-cpu-hotplug", ACPI_CPU_HOTPLUG_REG_LEN);
     memory_region_add_subregion(as, base_addr, &state->ctrl_reg);
 }
 
+24 -2
hw/arm/msf2-soc.c
···
 /*
  * SmartFusion2 SoC emulation.
  *
- * Copyright (c) 2017 Subbaraya Sundeep <sundeep.lkml@gmail.com>
+ * Copyright (c) 2017-2020 Subbaraya Sundeep <sundeep.lkml@gmail.com>
  *
  * Permission is hereby granted, free of charge, to any person obtaining a copy
  * of this software and associated documentation files (the "Software"), to deal
···
 #define MSF2_TIMER_BASE       0x40004000
 #define MSF2_SYSREG_BASE      0x40038000
+#define MSF2_EMAC_BASE        0x40041000
 
 #define ENVM_BASE_ADDRESS     0x60000000
 
 #define SRAM_BASE_ADDRESS     0x20000000
 
+#define MSF2_EMAC_IRQ         12
+
 #define MSF2_ENVM_MAX_SIZE    (512 * KiB)
 
 /*
···
     for (i = 0; i < MSF2_NUM_SPIS; i++) {
         sysbus_init_child_obj(obj, "spi[*]", &s->spi[i], sizeof(s->spi[i]),
                               TYPE_MSS_SPI);
+    }
+
+    sysbus_init_child_obj(obj, "emac", &s->emac, sizeof(s->emac),
+                          TYPE_MSS_EMAC);
+    if (nd_table[0].used) {
+        qemu_check_nic_model(&nd_table[0], TYPE_MSS_EMAC);
+        qdev_set_nic_properties(DEVICE(&s->emac), &nd_table[0]);
     }
 }
···
         g_free(bus_name);
     }
 
+    dev = DEVICE(&s->emac);
+    object_property_set_link(OBJECT(&s->emac), OBJECT(get_system_memory()),
+                             "ahb-bus", &error_abort);
+    object_property_set_bool(OBJECT(&s->emac), true, "realized", &err);
+    if (err != NULL) {
+        error_propagate(errp, err);
+        return;
+    }
+    busdev = SYS_BUS_DEVICE(dev);
+    sysbus_mmio_map(busdev, 0, MSF2_EMAC_BASE);
+    sysbus_connect_irq(busdev, 0,
+                       qdev_get_gpio_in(armv7m, MSF2_EMAC_IRQ));
+
     /* Below devices are not modelled yet. */
     create_unimplemented_device("i2c_0",      0x40002000, 0x1000);
     create_unimplemented_device("dma",        0x40003000, 0x1000);
···
     create_unimplemented_device("can",        0x40015000, 0x1000);
     create_unimplemented_device("rtc",        0x40017000, 0x1000);
     create_unimplemented_device("apb_config", 0x40020000, 0x10000);
-    create_unimplemented_device("emac",       0x40041000, 0x1000);
     create_unimplemented_device("usb",        0x40043000, 0x1000);
 }
 
+19 -1
hw/arm/virt.c
···
 #include "hw/acpi/generic_event_device.h"
 #include "hw/virtio/virtio-iommu.h"
 #include "hw/char/pl011.h"
+#include "qemu/guest-random.h"
 
 #define DEFINE_VIRT_MACHINE_LATEST(major, minor, latest) \
     static void virt_##major##_##minor##_class_init(ObjectClass *oc, \
···
     return false;
 }
 
+static void create_kaslr_seed(VirtMachineState *vms, const char *node)
+{
+    Error *err = NULL;
+    uint64_t seed;
+
+    if (qemu_guest_getrandom(&seed, sizeof(seed), &err)) {
+        error_free(err);
+        return;
+    }
+    qemu_fdt_setprop_u64(vms->fdt, node, "kaslr-seed", seed);
+}
+
 static void create_fdt(VirtMachineState *vms)
 {
     MachineState *ms = MACHINE(vms);
···
 
     /* /chosen must exist for load_dtb to fill in necessary properties later */
     qemu_fdt_add_subnode(fdt, "/chosen");
+    create_kaslr_seed(vms, "/chosen");
+
+    if (vms->secure) {
+        qemu_fdt_add_subnode(fdt, "/secure-chosen");
+        create_kaslr_seed(vms, "/secure-chosen");
+    }
 
     /* Clock node, for the benefit of the UART. The kernel device tree
      * binding documentation claims the PL011 node clock properties are
···
     qemu_fdt_setprop_string(vms->fdt, nodename, "status", "disabled");
     qemu_fdt_setprop_string(vms->fdt, nodename, "secure-status", "okay");
 
-    qemu_fdt_add_subnode(vms->fdt, "/secure-chosen");
     qemu_fdt_setprop_string(vms->fdt, "/secure-chosen", "stdout-path",
                             nodename);
 }
+49 -8
hw/arm/xilinx_zynq.c
···
 #include "hw/char/cadence_uart.h"
 #include "hw/net/cadence_gem.h"
 #include "hw/cpu/a9mpcore.h"
+#include "hw/qdev-clock.h"
+#include "sysemu/reset.h"
+
+#define TYPE_ZYNQ_MACHINE MACHINE_TYPE_NAME("xilinx-zynq-a9")
+#define ZYNQ_MACHINE(obj) \
+    OBJECT_CHECK(ZynqMachineState, (obj), TYPE_ZYNQ_MACHINE)
+
+/* board base frequency: 33.333333 MHz */
+#define PS_CLK_FREQUENCY (100 * 1000 * 1000 / 3)
 
 #define NUM_SPI_FLASHES 4
 #define NUM_QSPI_FLASHES 2
···
     0xe3001000 + ARMV7_IMM16(extract32((val),  0, 16)), /* movw r1 ... */ \
     0xe3401000 + ARMV7_IMM16(extract32((val), 16, 16)), /* movt r1 ... */ \
     0xe5801000 + (addr)
+
+typedef struct ZynqMachineState {
+    MachineState parent;
+    Clock *ps_clk;
+} ZynqMachineState;
 
 static void zynq_write_board_setup(ARMCPU *cpu,
                                    const struct arm_boot_info *info)
···
 
 static void zynq_init(MachineState *machine)
 {
+    ZynqMachineState *zynq_machine = ZYNQ_MACHINE(machine);
     ARMCPU *cpu;
     MemoryRegion *address_space_mem = get_system_memory();
     MemoryRegion *ocm_ram = g_new(MemoryRegion, 1);
-    DeviceState *dev;
+    DeviceState *dev, *slcr;
     SysBusDevice *busdev;
     qemu_irq pic[64];
     int n;
···
                           1, 0x0066, 0x0022, 0x0000, 0x0000, 0x0555, 0x2aa,
                           0);
 
-    dev = qdev_create(NULL, "xilinx,zynq_slcr");
-    qdev_init_nofail(dev);
-    sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, 0xF8000000);
+    /* Create slcr, keep a pointer to connect clocks */
+    slcr = qdev_create(NULL, "xilinx,zynq_slcr");
+    qdev_init_nofail(slcr);
+    sysbus_mmio_map(SYS_BUS_DEVICE(slcr), 0, 0xF8000000);
+
+    /* Create the main clock source, and feed slcr with it */
+    zynq_machine->ps_clk = CLOCK(object_new(TYPE_CLOCK));
+    object_property_add_child(OBJECT(zynq_machine), "ps_clk",
+                              OBJECT(zynq_machine->ps_clk), &error_abort);
+    object_unref(OBJECT(zynq_machine->ps_clk));
+    clock_set_hz(zynq_machine->ps_clk, PS_CLK_FREQUENCY);
+    qdev_connect_clock_in(slcr, "ps_clk", zynq_machine->ps_clk);
 
     dev = qdev_create(NULL, TYPE_A9MPCORE_PRIV);
     qdev_prop_set_uint32(dev, "num-cpu", 1);
···
     sysbus_create_simple(TYPE_CHIPIDEA, 0xE0002000, pic[53 - IRQ_OFFSET]);
     sysbus_create_simple(TYPE_CHIPIDEA, 0xE0003000, pic[76 - IRQ_OFFSET]);
 
-    cadence_uart_create(0xE0000000, pic[59 - IRQ_OFFSET], serial_hd(0));
-    cadence_uart_create(0xE0001000, pic[82 - IRQ_OFFSET], serial_hd(1));
+    dev = cadence_uart_create(0xE0000000, pic[59 - IRQ_OFFSET], serial_hd(0));
+    qdev_connect_clock_in(dev, "refclk",
+                          qdev_get_clock_out(slcr, "uart0_ref_clk"));
+    dev = cadence_uart_create(0xE0001000, pic[82 - IRQ_OFFSET], serial_hd(1));
+    qdev_connect_clock_in(dev, "refclk",
+                          qdev_get_clock_out(slcr, "uart1_ref_clk"));
 
     sysbus_create_varargs("cadence_ttc", 0xF8001000,
             pic[42-IRQ_OFFSET], pic[43-IRQ_OFFSET], pic[44-IRQ_OFFSET], NULL);
···
     arm_load_kernel(ARM_CPU(first_cpu), machine, &zynq_binfo);
 }
 
-static void zynq_machine_init(MachineClass *mc)
+static void zynq_machine_class_init(ObjectClass *oc, void *data)
 {
+    MachineClass *mc = MACHINE_CLASS(oc);
     mc->desc = "Xilinx Zynq Platform Baseboard for Cortex-A9";
     mc->init = zynq_init;
     mc->max_cpus = 1;
···
     mc->default_ram_id = "zynq.ext_ram";
 }
 
-DEFINE_MACHINE("xilinx-zynq-a9", zynq_machine_init)
+static const TypeInfo zynq_machine_type = {
+    .name = TYPE_ZYNQ_MACHINE,
+    .parent = TYPE_MACHINE,
+    .class_init = zynq_machine_class_init,
+    .instance_size = sizeof(ZynqMachineState),
+};
+
+static void zynq_machine_register_types(void)
+{
+    type_register_static(&zynq_machine_type);
+}
+
+type_init(zynq_machine_register_types)
+2
hw/arm/xlnx-versal.c
···
 
         dev = qdev_create(NULL, "xlnx.zdma");
         s->lpd.iou.adma[i] = SYS_BUS_DEVICE(dev);
+        object_property_set_int(OBJECT(s->lpd.iou.adma[i]), 128, "bus-width",
+                                &error_abort);
         object_property_add_child(OBJECT(s), name, OBJECT(dev), &error_fatal);
         qdev_init_nofail(dev);
 
+35 -4
hw/arm/xlnx-zcu102.c
···
 #include "qemu/error-report.h"
 #include "qemu/log.h"
 #include "sysemu/qtest.h"
+#include "sysemu/device_tree.h"
 
 typedef struct XlnxZCU102 {
     MachineState parent_obj;
···
 
     bool secure;
     bool virt;
+
+    struct arm_boot_info binfo;
 } XlnxZCU102;
 
 #define TYPE_ZCU102_MACHINE MACHINE_TYPE_NAME("xlnx-zcu102")
 #define ZCU102_MACHINE(obj) \
     OBJECT_CHECK(XlnxZCU102, (obj), TYPE_ZCU102_MACHINE)
 
-static struct arm_boot_info xlnx_zcu102_binfo;
 
 static bool zcu102_get_secure(Object *obj, Error **errp)
 {
···
     XlnxZCU102 *s = ZCU102_MACHINE(obj);
 
     s->virt = value;
+}
+
+static void zcu102_modify_dtb(const struct arm_boot_info *binfo, void *fdt)
+{
+    XlnxZCU102 *s = container_of(binfo, XlnxZCU102, binfo);
+    bool method_is_hvc;
+    char **node_path;
+    const char *r;
+    int prop_len;
+    int i;
+
+    /* If EL3 is enabled, we keep all firmware nodes active. */
+    if (!s->secure) {
+        node_path = qemu_fdt_node_path(fdt, NULL, "xlnx,zynqmp-firmware",
+                                       &error_fatal);
+
+        for (i = 0; node_path && node_path[i]; i++) {
+            r = qemu_fdt_getprop(fdt, node_path[i], "method", &prop_len, NULL);
+            method_is_hvc = r && !strcmp("hvc", r);
+
+            /* Allow HVC based firmware if EL2 is enabled. */
+            if (method_is_hvc && s->virt) {
+                continue;
+            }
+            qemu_fdt_setprop_string(fdt, node_path[i], "status", "disabled");
+        }
+        g_strfreev(node_path);
+    }
 }
 
 static void xlnx_zcu102_init(MachineState *machine)
···
 
     /* TODO create and connect IDE devices for ide_drive_get() */
 
-    xlnx_zcu102_binfo.ram_size = ram_size;
-    xlnx_zcu102_binfo.loader_start = 0;
-    arm_load_kernel(s->soc.boot_cpu_ptr, machine, &xlnx_zcu102_binfo);
+    s->binfo.ram_size = ram_size;
+    s->binfo.loader_start = 0;
+    s->binfo.modify_dtb = zcu102_modify_dtb;
+    arm_load_kernel(s->soc.boot_cpu_ptr, machine, &s->binfo);
 }
 
 static void xlnx_zcu102_machine_instance_init(Object *obj)
+63 -10
hw/char/cadence_uart.c
···
 #include "qemu/module.h"
 #include "hw/char/cadence_uart.h"
 #include "hw/irq.h"
+#include "hw/qdev-clock.h"
+#include "trace.h"
 
 #ifdef CADENCE_UART_ERR_DEBUG
 #define DB_PRINT(...) do { \
···
 #define LOCAL_LOOPBACK         (0x2 << UART_MR_CHMODE_SH)
 #define REMOTE_LOOPBACK        (0x3 << UART_MR_CHMODE_SH)
 
-#define UART_INPUT_CLK         50000000
+#define UART_DEFAULT_REF_CLK   (50 * 1000 * 1000)
 
 #define R_CR   (0x00/4)
 #define R_MR   (0x04/4)
···
 static void uart_parameters_setup(CadenceUARTState *s)
 {
     QEMUSerialSetParams ssp;
-    unsigned int baud_rate, packet_size;
+    unsigned int baud_rate, packet_size, input_clk;
+    input_clk = clock_get_hz(s->refclk);
+
+    baud_rate = (s->r[R_MR] & UART_MR_CLKS) ? input_clk / 8 : input_clk;
+    baud_rate /= (s->r[R_BRGR] * (s->r[R_BDIV] + 1));
+    trace_cadence_uart_baudrate(baud_rate);
 
-    baud_rate = (s->r[R_MR] & UART_MR_CLKS) ?
-            UART_INPUT_CLK / 8 : UART_INPUT_CLK;
+    ssp.speed = baud_rate;
 
-    ssp.speed = baud_rate / (s->r[R_BRGR] * (s->r[R_BDIV] + 1));
     packet_size = 1;
 
     switch (s->r[R_MR] & UART_MR_PAR) {
···
     }
 
     packet_size += ssp.data_bits + ssp.stop_bits;
+    if (ssp.speed == 0) {
+        /*
+         * Avoid division-by-zero below.
+         * TODO: find something better
+         */
+        ssp.speed = 1;
+    }
     s->char_tx_time = (NANOSECONDS_PER_SECOND / ssp.speed) * packet_size;
     qemu_chr_fe_ioctl(&s->chr, CHR_IOCTL_SERIAL_SET_PARAMS, &ssp);
 }
···
     CadenceUARTState *s = opaque;
     uint32_t ch_mode = s->r[R_MR] & UART_MR_CHMODE;
 
+    /* ignore characters when unclocked or in reset */
+    if (!clock_is_enabled(s->refclk) || device_is_in_reset(DEVICE(s))) {
+        return;
+    }
+
     if (ch_mode == NORMAL_MODE || ch_mode == ECHO_MODE) {
         uart_write_rx_fifo(opaque, buf, size);
     }
···
     CadenceUARTState *s = opaque;
     uint8_t buf = '\0';
 
+    /* ignore characters when unclocked or in reset */
+    if (!clock_is_enabled(s->refclk) || device_is_in_reset(DEVICE(s))) {
+        return;
+    }
+
     if (event == CHR_EVENT_BREAK) {
         uart_write_rx_fifo(opaque, &buf, 1);
     }
···
     .endianness = DEVICE_NATIVE_ENDIAN,
 };
 
-static void cadence_uart_reset(DeviceState *dev)
+static void cadence_uart_reset_init(Object *obj, ResetType type)
 {
-    CadenceUARTState *s = CADENCE_UART(dev);
+    CadenceUARTState *s = CADENCE_UART(obj);
 
     s->r[R_CR] = 0x00000128;
     s->r[R_IMR] = 0;
···
     s->r[R_BRGR] = 0x0000028B;
     s->r[R_BDIV] = 0x0000000F;
     s->r[R_TTRIG] = 0x00000020;
+}
+
+static void cadence_uart_reset_hold(Object *obj)
+{
+    CadenceUARTState *s = CADENCE_UART(obj);
 
     uart_rx_reset(s);
     uart_tx_reset(s);
···
                              uart_event, NULL, s, NULL, true);
 }
 
+static void cadence_uart_refclk_update(void *opaque)
+{
+    CadenceUARTState *s = opaque;
+
+    /* recompute uart's speed on clock change */
+    uart_parameters_setup(s);
+}
+
 static void cadence_uart_init(Object *obj)
 {
     SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
···
     sysbus_init_mmio(sbd, &s->iomem);
     sysbus_init_irq(sbd, &s->irq);
 
+    s->refclk = qdev_init_clock_in(DEVICE(obj), "refclk",
+                                   cadence_uart_refclk_update, s);
+    /* initialize the frequency in case the clock remains unconnected */
+    clock_set_hz(s->refclk, UART_DEFAULT_REF_CLK);
+
     s->char_tx_time = (NANOSECONDS_PER_SECOND / 9600) * 10;
 }
 
+static int cadence_uart_pre_load(void *opaque)
+{
+    CadenceUARTState *s = opaque;
+
+    /* the frequency will be overridden if the refclk field is present */
+    clock_set_hz(s->refclk, UART_DEFAULT_REF_CLK);
+    return 0;
+}
+
 static int cadence_uart_post_load(void *opaque, int version_id)
 {
     CadenceUARTState *s = opaque;
···
 
 static const VMStateDescription vmstate_cadence_uart = {
     .name = "cadence_uart",
-    .version_id = 2,
+    .version_id = 3,
     .minimum_version_id = 2,
+    .pre_load = cadence_uart_pre_load,
     .post_load = cadence_uart_post_load,
     .fields = (VMStateField[]) {
         VMSTATE_UINT32_ARRAY(r, CadenceUARTState, CADENCE_UART_R_MAX),
···
         VMSTATE_UINT32(tx_count, CadenceUARTState),
         VMSTATE_UINT32(rx_wpos, CadenceUARTState),
         VMSTATE_TIMER_PTR(fifo_trigger_handle, CadenceUARTState),
+        VMSTATE_CLOCK_V(refclk, CadenceUARTState, 3),
         VMSTATE_END_OF_LIST()
-    }
+    },
 };
 
 static Property cadence_uart_properties[] = {
···
 static void cadence_uart_class_init(ObjectClass *klass, void *data)
 {
     DeviceClass *dc = DEVICE_CLASS(klass);
+    ResettableClass *rc = RESETTABLE_CLASS(klass);
 
     dc->realize = cadence_uart_realize;
     dc->vmsd = &vmstate_cadence_uart;
-    dc->reset = cadence_uart_reset;
+    rc->phases.enter = cadence_uart_reset_init;
+    rc->phases.hold = cadence_uart_reset_hold;
     device_class_set_props(dc, cadence_uart_properties);
 }
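The reset values programmed in cadence_uart_reset_init() together with the refclk determine the modelled character time. As a standalone sketch (the divider formula baud = refclk / (BRGR * (BDIV + 1)) and the 50 MHz UART_DEFAULT_REF_CLK value are assumptions inferred from the Zynq UART, not code quoted from this hunk):

```c
#include <assert.h>
#include <stdint.h>

#define NANOSECONDS_PER_SECOND 1000000000ULL
#define UART_DEFAULT_REF_CLK   (50 * 1000 * 1000)  /* assumed 50 MHz default */

/* assumed divider formula: baud = refclk / (BRGR * (BDIV + 1)) */
static uint64_t cadence_baud_rate(uint64_t refclk_hz, uint32_t brgr,
                                  uint32_t bdiv)
{
    return refclk_hz / (brgr * (bdiv + 1));
}

/* nanoseconds needed to shift one packet_size-bit character out,
 * matching the s->char_tx_time computation in uart_parameters_setup() */
static uint64_t char_tx_time_ns(uint64_t baud, unsigned packet_size)
{
    return (NANOSECONDS_PER_SECOND / baud) * packet_size;
}
```

With the reset values BRGR = 0x28B (651) and BDIV = 0xF, a 50 MHz reference clock gives 4800 baud, so a 10-bit character takes about 2.08 ms, which is the figure s->char_tx_time would hold.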
hw/char/trace-events (+3)
··· 97 97 exynos_uart_rxsize(uint32_t channel, uint32_t size) "UART%d: Rx FIFO size: %d" 98 98 exynos_uart_channel_error(uint32_t channel) "Wrong UART channel number: %d" 99 99 exynos_uart_rx_timeout(uint32_t channel, uint32_t stat, uint32_t intsp) "UART%d: Rx timeout stat=0x%x intsp=0x%x" 100 + 101 + # hw/char/cadence_uart.c 102 + cadence_uart_baudrate(unsigned baudrate) "baudrate %u"
hw/core/Makefile.objs (+2)
··· 7 7 common-obj-y += vmstate-if.o 8 8 # irq.o needed for qdev GPIO handling: 9 9 common-obj-y += irq.o 10 + common-obj-y += clock.o qdev-clock.o 10 11 11 12 common-obj-$(CONFIG_SOFTMMU) += reset.o 12 13 common-obj-$(CONFIG_SOFTMMU) += qdev-fw.o ··· 20 21 common-obj-$(CONFIG_SOFTMMU) += loader.o 21 22 common-obj-$(CONFIG_SOFTMMU) += machine-hmp-cmds.o 22 23 common-obj-$(CONFIG_SOFTMMU) += numa.o 24 + common-obj-$(CONFIG_SOFTMMU) += clock-vmstate.o 23 25 obj-$(CONFIG_SOFTMMU) += machine-qmp-cmds.o 24 26 25 27 common-obj-$(CONFIG_EMPTY_SLOT) += empty_slot.o
hw/core/clock-vmstate.c (+25)
··· 1 + /* 2 + * Clock migration structure 3 + * 4 + * Copyright GreenSocs 2019-2020 5 + * 6 + * Authors: 7 + * Damien Hedde 8 + * 9 + * This work is licensed under the terms of the GNU GPL, version 2 or later. 10 + * See the COPYING file in the top-level directory. 11 + */ 12 + 13 + #include "qemu/osdep.h" 14 + #include "migration/vmstate.h" 15 + #include "hw/clock.h" 16 + 17 + const VMStateDescription vmstate_clock = { 18 + .name = "clock", 19 + .version_id = 0, 20 + .minimum_version_id = 0, 21 + .fields = (VMStateField[]) { 22 + VMSTATE_UINT64(period, Clock), 23 + VMSTATE_END_OF_LIST() 24 + } 25 + };
hw/core/clock.c (+130)
··· 1 + /* 2 + * Hardware Clocks 3 + * 4 + * Copyright GreenSocs 2016-2020 5 + * 6 + * Authors: 7 + * Frederic Konrad 8 + * Damien Hedde 9 + * 10 + * This work is licensed under the terms of the GNU GPL, version 2 or later. 11 + * See the COPYING file in the top-level directory. 12 + */ 13 + 14 + #include "qemu/osdep.h" 15 + #include "hw/clock.h" 16 + #include "trace.h" 17 + 18 + #define CLOCK_PATH(_clk) (_clk->canonical_path) 19 + 20 + void clock_setup_canonical_path(Clock *clk) 21 + { 22 + g_free(clk->canonical_path); 23 + clk->canonical_path = object_get_canonical_path(OBJECT(clk)); 24 + } 25 + 26 + void clock_set_callback(Clock *clk, ClockCallback *cb, void *opaque) 27 + { 28 + clk->callback = cb; 29 + clk->callback_opaque = opaque; 30 + } 31 + 32 + void clock_clear_callback(Clock *clk) 33 + { 34 + clock_set_callback(clk, NULL, NULL); 35 + } 36 + 37 + void clock_set(Clock *clk, uint64_t period) 38 + { 39 + trace_clock_set(CLOCK_PATH(clk), CLOCK_PERIOD_TO_NS(clk->period), 40 + CLOCK_PERIOD_TO_NS(period)); 41 + clk->period = period; 42 + } 43 + 44 + static void clock_propagate_period(Clock *clk, bool call_callbacks) 45 + { 46 + Clock *child; 47 + 48 + QLIST_FOREACH(child, &clk->children, sibling) { 49 + if (child->period != clk->period) { 50 + child->period = clk->period; 51 + trace_clock_update(CLOCK_PATH(child), CLOCK_PATH(clk), 52 + CLOCK_PERIOD_TO_NS(clk->period), 53 + call_callbacks); 54 + if (call_callbacks && child->callback) { 55 + child->callback(child->callback_opaque); 56 + } 57 + clock_propagate_period(child, call_callbacks); 58 + } 59 + } 60 + } 61 + 62 + void clock_propagate(Clock *clk) 63 + { 64 + assert(clk->source == NULL); 65 + trace_clock_propagate(CLOCK_PATH(clk)); 66 + clock_propagate_period(clk, true); 67 + } 68 + 69 + void clock_set_source(Clock *clk, Clock *src) 70 + { 71 + /* changing clock source is not supported */ 72 + assert(!clk->source); 73 + 74 + trace_clock_set_source(CLOCK_PATH(clk), CLOCK_PATH(src)); 75 + 76 + clk->period = 
src->period; 77 + QLIST_INSERT_HEAD(&src->children, clk, sibling); 78 + clk->source = src; 79 + clock_propagate_period(clk, false); 80 + } 81 + 82 + static void clock_disconnect(Clock *clk) 83 + { 84 + if (clk->source == NULL) { 85 + return; 86 + } 87 + 88 + trace_clock_disconnect(CLOCK_PATH(clk)); 89 + 90 + clk->source = NULL; 91 + QLIST_REMOVE(clk, sibling); 92 + } 93 + 94 + static void clock_initfn(Object *obj) 95 + { 96 + Clock *clk = CLOCK(obj); 97 + 98 + QLIST_INIT(&clk->children); 99 + } 100 + 101 + static void clock_finalizefn(Object *obj) 102 + { 103 + Clock *clk = CLOCK(obj); 104 + Clock *child, *next; 105 + 106 + /* clear our list of children */ 107 + QLIST_FOREACH_SAFE(child, &clk->children, sibling, next) { 108 + clock_disconnect(child); 109 + } 110 + 111 + /* remove us from source's children list */ 112 + clock_disconnect(clk); 113 + 114 + g_free(clk->canonical_path); 115 + } 116 + 117 + static const TypeInfo clock_info = { 118 + .name = TYPE_CLOCK, 119 + .parent = TYPE_OBJECT, 120 + .instance_size = sizeof(Clock), 121 + .instance_init = clock_initfn, 122 + .instance_finalize = clock_finalizefn, 123 + }; 124 + 125 + static void clock_register_types(void) 126 + { 127 + type_register_static(&clock_info); 128 + } 129 + 130 + type_init(clock_register_types)
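clock_set() and clock_propagate_period() above traffic in raw period values. Per hw/clock.h at this commit (the header is not part of this hunk, so treat the exact encoding as an assumption), periods are stored in units of 2^-32 ns, which lets sub-nanosecond periods survive integer division. A minimal model of the conversion helpers:

```c
#include <assert.h>
#include <stdint.h>

/* assumed encoding from hw/clock.h: periods in units of 2^-32 ns */
#define CLOCK_PERIOD_1SEC (1000000000ULL << 32)

/* frequency -> period; 0 Hz maps to the special "disabled" period 0 */
static uint64_t clock_period_from_hz(unsigned hz)
{
    return hz ? CLOCK_PERIOD_1SEC / hz : 0;
}

/* period -> whole nanoseconds, as CLOCK_PERIOD_TO_NS() would report */
static uint64_t clock_period_to_ns(uint64_t period)
{
    return period >> 32;
}
```

A 50 MHz clock, for example, round-trips to a 20 ns period, which is what the trace_clock_* events above would print.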
hw/core/qdev-clock.c (+185)
··· 1 + /* 2 + * Device's clock input and output 3 + * 4 + * Copyright GreenSocs 2016-2020 5 + * 6 + * Authors: 7 + * Frederic Konrad 8 + * Damien Hedde 9 + * 10 + * This work is licensed under the terms of the GNU GPL, version 2 or later. 11 + * See the COPYING file in the top-level directory. 12 + */ 13 + 14 + #include "qemu/osdep.h" 15 + #include "hw/qdev-clock.h" 16 + #include "hw/qdev-core.h" 17 + #include "qapi/error.h" 18 + 19 + /* 20 + * qdev_init_clocklist: 21 + * Add a new clock in a device 22 + */ 23 + static NamedClockList *qdev_init_clocklist(DeviceState *dev, const char *name, 24 + bool output, Clock *clk) 25 + { 26 + NamedClockList *ncl; 27 + 28 + /* 29 + * Clock must be added before realize() so that we can compute the 30 + * clock's canonical path during device_realize(). 31 + */ 32 + assert(!dev->realized); 33 + 34 + /* 35 + * The ncl structure is freed by qdev_finalize_clocklist() which will 36 + * be called during @dev's device_finalize(). 37 + */ 38 + ncl = g_new0(NamedClockList, 1); 39 + ncl->name = g_strdup(name); 40 + ncl->output = output; 41 + ncl->alias = (clk != NULL); 42 + 43 + /* 44 + * Trying to create a clock whose name clashes with some other 45 + * clock or property is a bug in the caller and we will abort(). 46 + */ 47 + if (clk == NULL) { 48 + clk = CLOCK(object_new(TYPE_CLOCK)); 49 + object_property_add_child(OBJECT(dev), name, OBJECT(clk), &error_abort); 50 + if (output) { 51 + /* 52 + * Remove object_new()'s initial reference. 53 + * Note that for inputs, the reference created by object_new() 54 + * will be deleted in qdev_finalize_clocklist(). 
55 + */ 56 + object_unref(OBJECT(clk)); 57 + } 58 + } else { 59 + object_property_add_link(OBJECT(dev), name, 60 + object_get_typename(OBJECT(clk)), 61 + (Object **) &ncl->clock, 62 + NULL, OBJ_PROP_LINK_STRONG, &error_abort); 63 + } 64 + 65 + ncl->clock = clk; 66 + 67 + QLIST_INSERT_HEAD(&dev->clocks, ncl, node); 68 + return ncl; 69 + } 70 + 71 + void qdev_finalize_clocklist(DeviceState *dev) 72 + { 73 + /* called by @dev's device_finalize() */ 74 + NamedClockList *ncl, *ncl_next; 75 + 76 + QLIST_FOREACH_SAFE(ncl, &dev->clocks, node, ncl_next) { 77 + QLIST_REMOVE(ncl, node); 78 + if (!ncl->output && !ncl->alias) { 79 + /* 80 + * We kept a reference on the input clock to ensure it lives up to 81 + * this point so we can safely remove the callback. 82 + * It avoids having a callback to a deleted object if ncl->clock 83 + * is still referenced somewhere else (eg: by a clock output). 84 + */ 85 + clock_clear_callback(ncl->clock); 86 + object_unref(OBJECT(ncl->clock)); 87 + } 88 + g_free(ncl->name); 89 + g_free(ncl); 90 + } 91 + } 92 + 93 + Clock *qdev_init_clock_out(DeviceState *dev, const char *name) 94 + { 95 + NamedClockList *ncl; 96 + 97 + assert(name); 98 + 99 + ncl = qdev_init_clocklist(dev, name, true, NULL); 100 + 101 + return ncl->clock; 102 + } 103 + 104 + Clock *qdev_init_clock_in(DeviceState *dev, const char *name, 105 + ClockCallback *callback, void *opaque) 106 + { 107 + NamedClockList *ncl; 108 + 109 + assert(name); 110 + 111 + ncl = qdev_init_clocklist(dev, name, false, NULL); 112 + 113 + if (callback) { 114 + clock_set_callback(ncl->clock, callback, opaque); 115 + } 116 + return ncl->clock; 117 + } 118 + 119 + void qdev_init_clocks(DeviceState *dev, const ClockPortInitArray clocks) 120 + { 121 + const struct ClockPortInitElem *elem; 122 + 123 + for (elem = &clocks[0]; elem->name != NULL; elem++) { 124 + Clock **clkp; 125 + /* offset cannot be inside the DeviceState part */ 126 + assert(elem->offset > sizeof(DeviceState)); 127 + clkp = (Clock 
**)(((void *) dev) + elem->offset); 128 + if (elem->is_output) { 129 + *clkp = qdev_init_clock_out(dev, elem->name); 130 + } else { 131 + *clkp = qdev_init_clock_in(dev, elem->name, elem->callback, dev); 132 + } 133 + } 134 + } 135 + 136 + static NamedClockList *qdev_get_clocklist(DeviceState *dev, const char *name) 137 + { 138 + NamedClockList *ncl; 139 + 140 + QLIST_FOREACH(ncl, &dev->clocks, node) { 141 + if (strcmp(name, ncl->name) == 0) { 142 + return ncl; 143 + } 144 + } 145 + 146 + return NULL; 147 + } 148 + 149 + Clock *qdev_get_clock_in(DeviceState *dev, const char *name) 150 + { 151 + NamedClockList *ncl; 152 + 153 + assert(name); 154 + 155 + ncl = qdev_get_clocklist(dev, name); 156 + assert(!ncl->output); 157 + 158 + return ncl->clock; 159 + } 160 + 161 + Clock *qdev_get_clock_out(DeviceState *dev, const char *name) 162 + { 163 + NamedClockList *ncl; 164 + 165 + assert(name); 166 + 167 + ncl = qdev_get_clocklist(dev, name); 168 + assert(ncl->output); 169 + 170 + return ncl->clock; 171 + } 172 + 173 + Clock *qdev_alias_clock(DeviceState *dev, const char *name, 174 + DeviceState *alias_dev, const char *alias_name) 175 + { 176 + NamedClockList *ncl; 177 + 178 + assert(name && alias_name); 179 + 180 + ncl = qdev_get_clocklist(dev, name); 181 + 182 + qdev_init_clocklist(alias_dev, alias_name, ncl->output, ncl->clock); 183 + 184 + return ncl->clock; 185 + }
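qdev_init_clocks() walks a ClockPortInitArray and writes each created Clock pointer directly into the device's state structure through a byte offset, which is why it asserts that the offset lies beyond the DeviceState header. A toy standalone model of that offset mechanism (all type and field names here are illustrative stand-ins, not QEMU's):

```c
#include <assert.h>
#include <stddef.h>

/* toy stand-ins for Clock and DeviceState */
typedef struct { int dummy; } Clock;
typedef struct { int base; } DeviceState;

/* a device state embedding DeviceState first, as qdev requires */
typedef struct {
    DeviceState parent;
    Clock *refclk;
} MyDeviceState;

typedef struct {
    const char *name;
    size_t offset;   /* offsetof(MyDeviceState, field) */
} ClockPortInitElem;

static Clock the_clock;

/* write a Clock* at dev + offset, as the qdev_init_clocks() loop does;
 * QEMU additionally asserts offset > sizeof(DeviceState) here */
static void init_clock_at(DeviceState *dev, const ClockPortInitElem *elem)
{
    Clock **clkp = (Clock **)((char *)dev + elem->offset);
    *clkp = &the_clock;
}
```

Because the element records only a name and an offset, the same static array (built with macros like QDEV_CLOCK_IN in the real code) can initialize any instance of the device.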
hw/core/qdev.c (+12)
··· 37 37 #include "hw/qdev-properties.h" 38 38 #include "hw/boards.h" 39 39 #include "hw/sysbus.h" 40 + #include "hw/qdev-clock.h" 40 41 #include "migration/vmstate.h" 41 42 #include "trace.h" 42 43 ··· 855 856 DeviceClass *dc = DEVICE_GET_CLASS(dev); 856 857 HotplugHandler *hotplug_ctrl; 857 858 BusState *bus; 859 + NamedClockList *ncl; 858 860 Error *local_err = NULL; 859 861 bool unattached_parent = false; 860 862 static int unattached_count; ··· 902 904 */ 903 905 g_free(dev->canonical_path); 904 906 dev->canonical_path = object_get_canonical_path(OBJECT(dev)); 907 + QLIST_FOREACH(ncl, &dev->clocks, node) { 908 + if (ncl->alias) { 909 + continue; 910 + } else { 911 + clock_setup_canonical_path(ncl->clock); 912 + } 913 + } 905 914 906 915 if (qdev_get_vmsd(dev)) { 907 916 if (vmstate_register_with_alias_id(VMSTATE_IF(dev), ··· 1025 1034 dev->allow_unplug_during_migration = false; 1026 1035 1027 1036 QLIST_INIT(&dev->gpios); 1037 + QLIST_INIT(&dev->clocks); 1028 1038 } 1029 1039 1030 1040 static void device_post_init(Object *obj) ··· 1053 1063 * here 1054 1064 */ 1055 1065 } 1066 + 1067 + qdev_finalize_clocklist(dev); 1056 1068 1057 1069 /* Only send event if the device had been completely realized */ 1058 1070 if (dev->pending_deleted_event) {
hw/core/trace-events (+7)
··· 27 27 resettable_phase_exit_exec(void *obj, const char *objtype, int has_method) "obj=%p(%s) method=%d" 28 28 resettable_phase_exit_end(void *obj, const char *objtype, unsigned count) "obj=%p(%s) count=%d" 29 29 resettable_transitional_function(void *obj, const char *objtype) "obj=%p(%s)" 30 + 31 + # clock.c 32 + clock_set_source(const char *clk, const char *src) "'%s', src='%s'" 33 + clock_disconnect(const char *clk) "'%s'" 34 + clock_set(const char *clk, uint64_t old, uint64_t new) "'%s', ns=%"PRIu64"->%"PRIu64 35 + clock_propagate(const char *clk) "'%s'" 36 + clock_update(const char *clk, const char *src, uint64_t val, int cb) "'%s', src='%s', ns=%"PRIu64", cb=%d"
hw/dma/xlnx-zdma.c (+17 -8)
··· 299 299 s->regs[basereg + 1] = addr >> 32; 300 300 } 301 301 302 - static bool zdma_load_descriptor(XlnxZDMA *s, uint64_t addr, void *buf) 302 + static void zdma_load_descriptor_reg(XlnxZDMA *s, unsigned int reg, 303 + XlnxZDMADescr *descr) 304 + { 305 + descr->addr = zdma_get_regaddr64(s, reg); 306 + descr->size = s->regs[reg + 2]; 307 + descr->attr = s->regs[reg + 3]; 308 + } 309 + 310 + static bool zdma_load_descriptor(XlnxZDMA *s, uint64_t addr, 311 + XlnxZDMADescr *descr) 303 312 { 304 313 /* ZDMA descriptors must be aligned to their own size. */ 305 314 if (addr % sizeof(XlnxZDMADescr)) { 306 315 qemu_log_mask(LOG_GUEST_ERROR, 307 316 "zdma: unaligned descriptor at %" PRIx64, 308 317 addr); 309 - memset(buf, 0x0, sizeof(XlnxZDMADescr)); 318 + memset(descr, 0x0, sizeof(XlnxZDMADescr)); 310 319 s->error = true; 311 320 return false; 312 321 } 313 322 314 - address_space_read(s->dma_as, addr, s->attr, buf, sizeof(XlnxZDMADescr)); 323 + descr->addr = address_space_ldq_le(s->dma_as, addr, s->attr, NULL); 324 + descr->size = address_space_ldl_le(s->dma_as, addr + 8, s->attr, NULL); 325 + descr->attr = address_space_ldl_le(s->dma_as, addr + 12, s->attr, NULL); 315 326 return true; 316 327 } 317 328 ··· 321 332 unsigned int ptype = ARRAY_FIELD_EX32(s->regs, ZDMA_CH_CTRL0, POINT_TYPE); 322 333 323 334 if (ptype == PT_REG) { 324 - memcpy(&s->dsc_src, &s->regs[R_ZDMA_CH_SRC_DSCR_WORD0], 325 - sizeof(s->dsc_src)); 335 + zdma_load_descriptor_reg(s, R_ZDMA_CH_SRC_DSCR_WORD0, &s->dsc_src); 326 336 return; 327 337 } 328 338 ··· 344 354 } else { 345 355 addr = zdma_get_regaddr64(s, basereg); 346 356 addr += sizeof(s->dsc_dst); 347 - address_space_read(s->dma_as, addr, s->attr, (void *) &next, 8); 357 + next = address_space_ldq_le(s->dma_as, addr, s->attr, NULL); 348 358 } 349 359 350 360 zdma_put_regaddr64(s, basereg, next); ··· 357 367 bool dst_type; 358 368 359 369 if (ptype == PT_REG) { 360 - memcpy(&s->dsc_dst, &s->regs[R_ZDMA_CH_DST_DSCR_WORD0], 361 - 
sizeof(s->dsc_dst)); 370 + zdma_load_descriptor_reg(s, R_ZDMA_CH_DST_DSCR_WORD0, &s->dsc_dst); 362 371 return; 363 372 } 364 373
hw/intc/arm_gicv3_kvm.c (+1 -3)
··· 658 658 659 659 static void arm_gicv3_icc_reset(CPUARMState *env, const ARMCPRegInfo *ri) 660 660 { 661 - ARMCPU *cpu; 662 661 GICv3State *s; 663 662 GICv3CPUState *c; 664 663 665 664 c = (GICv3CPUState *)env->gicv3state; 666 665 s = c->gic; 667 - cpu = ARM_CPU(c->cpu); 668 666 669 667 c->icc_pmr_el1 = 0; 670 668 c->icc_bpr[GICV3_G0] = GIC_MIN_BPR; ··· 681 679 682 680 /* Initialize to actual HW supported configuration */ 683 681 kvm_device_access(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_CPU_SYSREGS, 684 - KVM_VGIC_ATTR(ICC_CTLR_EL1, cpu->mp_affinity), 682 + KVM_VGIC_ATTR(ICC_CTLR_EL1, c->gicr_typer), 685 683 &c->icc_ctlr_el1[GICV3_NS], false, &error_abort); 686 684 687 685 c->icc_ctlr_el1[GICV3_S] = c->icc_ctlr_el1[GICV3_NS];
hw/misc/zynq_slcr.c (+168 -4)
··· 22 22 #include "qemu/log.h" 23 23 #include "qemu/module.h" 24 24 #include "hw/registerfields.h" 25 + #include "hw/qdev-clock.h" 25 26 26 27 #ifndef ZYNQ_SLCR_ERR_DEBUG 27 28 #define ZYNQ_SLCR_ERR_DEBUG 0 ··· 45 46 REG32(ARM_PLL_CTRL, 0x100) 46 47 REG32(DDR_PLL_CTRL, 0x104) 47 48 REG32(IO_PLL_CTRL, 0x108) 49 + /* fields for [ARM|DDR|IO]_PLL_CTRL registers */ 50 + FIELD(xxx_PLL_CTRL, PLL_RESET, 0, 1) 51 + FIELD(xxx_PLL_CTRL, PLL_PWRDWN, 1, 1) 52 + FIELD(xxx_PLL_CTRL, PLL_BYPASS_QUAL, 3, 1) 53 + FIELD(xxx_PLL_CTRL, PLL_BYPASS_FORCE, 4, 1) 54 + FIELD(xxx_PLL_CTRL, PLL_FPDIV, 12, 7) 48 55 REG32(PLL_STATUS, 0x10c) 49 56 REG32(ARM_PLL_CFG, 0x110) 50 57 REG32(DDR_PLL_CFG, 0x114) ··· 64 71 REG32(LQSPI_CLK_CTRL, 0x14c) 65 72 REG32(SDIO_CLK_CTRL, 0x150) 66 73 REG32(UART_CLK_CTRL, 0x154) 74 + FIELD(UART_CLK_CTRL, CLKACT0, 0, 1) 75 + FIELD(UART_CLK_CTRL, CLKACT1, 1, 1) 76 + FIELD(UART_CLK_CTRL, SRCSEL, 4, 2) 77 + FIELD(UART_CLK_CTRL, DIVISOR, 8, 6) 67 78 REG32(SPI_CLK_CTRL, 0x158) 68 79 REG32(CAN_CLK_CTRL, 0x15c) 69 80 REG32(CAN_MIOCLK_CTRL, 0x160) ··· 179 190 MemoryRegion iomem; 180 191 181 192 uint32_t regs[ZYNQ_SLCR_NUM_REGS]; 193 + 194 + Clock *ps_clk; 195 + Clock *uart0_ref_clk; 196 + Clock *uart1_ref_clk; 182 197 } ZynqSLCRState; 183 198 184 - static void zynq_slcr_reset(DeviceState *d) 199 + /* 200 + * return the output frequency of ARM/DDR/IO pll 201 + * using input frequency and PLL_CTRL register 202 + */ 203 + static uint64_t zynq_slcr_compute_pll(uint64_t input, uint32_t ctrl_reg) 185 204 { 186 - ZynqSLCRState *s = ZYNQ_SLCR(d); 205 + uint32_t mult = ((ctrl_reg & R_xxx_PLL_CTRL_PLL_FPDIV_MASK) >> 206 + R_xxx_PLL_CTRL_PLL_FPDIV_SHIFT); 207 + 208 + /* first, check if pll is bypassed */ 209 + if (ctrl_reg & R_xxx_PLL_CTRL_PLL_BYPASS_FORCE_MASK) { 210 + return input; 211 + } 212 + 213 + /* is pll disabled ? 
*/ 214 + if (ctrl_reg & (R_xxx_PLL_CTRL_PLL_RESET_MASK | 215 + R_xxx_PLL_CTRL_PLL_PWRDWN_MASK)) { 216 + return 0; 217 + } 218 + 219 + /* frequency multiplier -> period division */ 220 + return input / mult; 221 + } 222 + 223 + /* 224 + * return the output period of a clock given: 225 + * + the periods in an array corresponding to input mux selector 226 + * + the register xxx_CLK_CTRL value 227 + * + enable bit index in ctrl register 228 + * 229 + * This function makes the assumption that the ctrl_reg value is organized as 230 + * follows: 231 + * + bits[13:8] clock frequency divisor 232 + * + bits[5:4] clock mux selector (index in array) 233 + * + bits[index] clock enable 234 + */ 235 + static uint64_t zynq_slcr_compute_clock(const uint64_t periods[], 236 + uint32_t ctrl_reg, 237 + unsigned index) 238 + { 239 + uint32_t srcsel = extract32(ctrl_reg, 4, 2); /* bits [5:4] */ 240 + uint32_t divisor = extract32(ctrl_reg, 8, 6); /* bits [13:8] */ 241 + 242 + /* first, check if clock is disabled */ 243 + if (((ctrl_reg >> index) & 1u) == 0) { 244 + return 0; 245 + } 246 + 247 + /* 248 + * according to the Zynq technical ref. manual UG585 v1.12.2 in 249 + * Clocks chapter, section 25.10.1 page 705: 250 + * "The 6-bit divider provides a divide range of 1 to 63" 251 + * We follow here what is implemented in linux kernel and consider 252 + * the 0 value as a bypass (no division). 253 + */ 254 + /* frequency divisor -> period multiplication */ 255 + return periods[srcsel] * (divisor ? divisor : 1u); 256 + } 257 + 258 + /* 259 + * macro helper around zynq_slcr_compute_clock to avoid repeating 260 + * the register name. 261 + */ 262 + #define ZYNQ_COMPUTE_CLK(state, plls, reg, enable_field) \ 263 + zynq_slcr_compute_clock((plls), (state)->regs[reg], \ 264 + reg ## _ ## enable_field ## _SHIFT) 265 + 266 + /** 267 + * Compute and set the ouputs clocks periods. 268 + * But do not propagate them further. 
Connected clocks 269 + * will not receive any updates (See zynq_slcr_compute_clocks()) 270 + */ 271 + static void zynq_slcr_compute_clocks(ZynqSLCRState *s) 272 + { 273 + uint64_t ps_clk = clock_get(s->ps_clk); 274 + 275 + /* consider outputs clocks are disabled while in reset */ 276 + if (device_is_in_reset(DEVICE(s))) { 277 + ps_clk = 0; 278 + } 279 + 280 + uint64_t io_pll = zynq_slcr_compute_pll(ps_clk, s->regs[R_IO_PLL_CTRL]); 281 + uint64_t arm_pll = zynq_slcr_compute_pll(ps_clk, s->regs[R_ARM_PLL_CTRL]); 282 + uint64_t ddr_pll = zynq_slcr_compute_pll(ps_clk, s->regs[R_DDR_PLL_CTRL]); 283 + 284 + uint64_t uart_mux[4] = {io_pll, io_pll, arm_pll, ddr_pll}; 285 + 286 + /* compute uartX reference clocks */ 287 + clock_set(s->uart0_ref_clk, 288 + ZYNQ_COMPUTE_CLK(s, uart_mux, R_UART_CLK_CTRL, CLKACT0)); 289 + clock_set(s->uart1_ref_clk, 290 + ZYNQ_COMPUTE_CLK(s, uart_mux, R_UART_CLK_CTRL, CLKACT1)); 291 + } 292 + 293 + /** 294 + * Propagate the outputs clocks. 295 + * zynq_slcr_compute_clocks() should have been called before 296 + * to configure them. 
297 + */ 298 + static void zynq_slcr_propagate_clocks(ZynqSLCRState *s) 299 + { 300 + clock_propagate(s->uart0_ref_clk); 301 + clock_propagate(s->uart1_ref_clk); 302 + } 303 + 304 + static void zynq_slcr_ps_clk_callback(void *opaque) 305 + { 306 + ZynqSLCRState *s = (ZynqSLCRState *) opaque; 307 + zynq_slcr_compute_clocks(s); 308 + zynq_slcr_propagate_clocks(s); 309 + } 310 + 311 + static void zynq_slcr_reset_init(Object *obj, ResetType type) 312 + { 313 + ZynqSLCRState *s = ZYNQ_SLCR(obj); 187 314 int i; 188 315 189 316 DB_PRINT("RESET\n"); ··· 277 404 s->regs[R_DDRIOB + 12] = 0x00000021; 278 405 } 279 406 407 + static void zynq_slcr_reset_hold(Object *obj) 408 + { 409 + ZynqSLCRState *s = ZYNQ_SLCR(obj); 410 + 411 + /* will disable all output clocks */ 412 + zynq_slcr_compute_clocks(s); 413 + zynq_slcr_propagate_clocks(s); 414 + } 415 + 416 + static void zynq_slcr_reset_exit(Object *obj) 417 + { 418 + ZynqSLCRState *s = ZYNQ_SLCR(obj); 419 + 420 + /* will compute output clocks according to ps_clk and registers */ 421 + zynq_slcr_compute_clocks(s); 422 + zynq_slcr_propagate_clocks(s); 423 + } 280 424 281 425 static bool zynq_slcr_check_offset(hwaddr offset, bool rnw) 282 426 { ··· 409 553 qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET); 410 554 } 411 555 break; 556 + case R_IO_PLL_CTRL: 557 + case R_ARM_PLL_CTRL: 558 + case R_DDR_PLL_CTRL: 559 + case R_UART_CLK_CTRL: 560 + zynq_slcr_compute_clocks(s); 561 + zynq_slcr_propagate_clocks(s); 562 + break; 412 563 } 413 564 } 414 565 ··· 416 567 .read = zynq_slcr_read, 417 568 .write = zynq_slcr_write, 418 569 .endianness = DEVICE_NATIVE_ENDIAN, 570 + }; 571 + 572 + static const ClockPortInitArray zynq_slcr_clocks = { 573 + QDEV_CLOCK_IN(ZynqSLCRState, ps_clk, zynq_slcr_ps_clk_callback), 574 + QDEV_CLOCK_OUT(ZynqSLCRState, uart0_ref_clk), 575 + QDEV_CLOCK_OUT(ZynqSLCRState, uart1_ref_clk), 576 + QDEV_CLOCK_END 419 577 }; 420 578 421 579 static void zynq_slcr_init(Object *obj) ··· 425 583 
memory_region_init_io(&s->iomem, obj, &slcr_ops, s, "slcr", 426 584 ZYNQ_SLCR_MMIO_SIZE); 427 585 sysbus_init_mmio(SYS_BUS_DEVICE(obj), &s->iomem); 586 + 587 + qdev_init_clocks(DEVICE(obj), zynq_slcr_clocks); 428 588 } 429 589 430 590 static const VMStateDescription vmstate_zynq_slcr = { 431 591 .name = "zynq_slcr", 432 - .version_id = 2, 592 + .version_id = 3, 433 593 .minimum_version_id = 2, 434 594 .fields = (VMStateField[]) { 435 595 VMSTATE_UINT32_ARRAY(regs, ZynqSLCRState, ZYNQ_SLCR_NUM_REGS), 596 + VMSTATE_CLOCK_V(ps_clk, ZynqSLCRState, 3), 436 597 VMSTATE_END_OF_LIST() 437 598 } 438 599 }; ··· 440 601 static void zynq_slcr_class_init(ObjectClass *klass, void *data) 441 602 { 442 603 DeviceClass *dc = DEVICE_CLASS(klass); 604 + ResettableClass *rc = RESETTABLE_CLASS(klass); 443 605 444 606 dc->vmsd = &vmstate_zynq_slcr; 445 - dc->reset = zynq_slcr_reset; 607 + rc->phases.enter = zynq_slcr_reset_init; 608 + rc->phases.hold = zynq_slcr_reset_hold; 609 + rc->phases.exit = zynq_slcr_reset_exit; 446 610 } 447 611 448 612 static const TypeInfo zynq_slcr_info = {
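Note that zynq_slcr_compute_pll() and zynq_slcr_compute_clock() operate on clock *periods*, so a frequency multiplication becomes a period division and a frequency division a period multiplication. The arithmetic, stripped of the register-field macros (bit positions taken from the FIELD() definitions in this patch), can be exercised standalone:

```c
#include <assert.h>
#include <stdint.h>

/* xxx_PLL_CTRL fields, per the patch: RESET bit 0, PWRDWN bit 1,
 * BYPASS_FORCE bit 4, FPDIV bits [18:12] */
#define PLL_RESET        (1u << 0)
#define PLL_PWRDWN       (1u << 1)
#define PLL_BYPASS_FORCE (1u << 4)

static uint64_t compute_pll(uint64_t input_period, uint32_t ctrl)
{
    uint32_t mult = (ctrl >> 12) & 0x7f;

    if (ctrl & PLL_BYPASS_FORCE) {
        return input_period;            /* bypassed: input passes through */
    }
    if (ctrl & (PLL_RESET | PLL_PWRDWN)) {
        return 0;                       /* PLL disabled */
    }
    return input_period / mult;         /* freq x mult == period / mult */
}

static uint64_t compute_clock(const uint64_t periods[4], uint32_t ctrl,
                              unsigned enable_bit)
{
    uint32_t srcsel = (ctrl >> 4) & 0x3;    /* mux select, bits [5:4]  */
    uint32_t divisor = (ctrl >> 8) & 0x3f;  /* divider,    bits [13:8] */

    if (!((ctrl >> enable_bit) & 1u)) {
        return 0;                       /* clock gated off */
    }
    /* divisor 0 is treated as a bypass (no division), as in Linux */
    return periods[srcsel] * (divisor ? divisor : 1u);
}
```

A period of 0 doubles as the "clock disabled" value, which is what the reset-hold phase relies on to shut the UART reference clocks down.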
hw/net/Makefile.objs (+1)
··· 55 55 obj-$(call lnot,$(CONFIG_ROCKER)) += rocker/qmp-norocker.o 56 56 57 57 common-obj-$(CONFIG_CAN_BUS) += can/ 58 + common-obj-$(CONFIG_MSF2) += msf2-emac.o
hw/net/cadence_gem.c (+15 -1)
··· 411 411 desc[1] |= DESC_1_RX_SOF; 412 412 } 413 413 414 + static inline void rx_desc_clear_control(uint32_t *desc) 415 + { 416 + desc[1] = 0; 417 + } 418 + 414 419 static inline void rx_desc_set_eof(uint32_t *desc) 415 420 { 416 421 desc[1] |= DESC_1_RX_EOF; ··· 999 1004 rxbuf_ptr += MIN(bytes_to_copy, rxbufsize); 1000 1005 bytes_to_copy -= MIN(bytes_to_copy, rxbufsize); 1001 1006 1007 + rx_desc_clear_control(s->rx_desc[q]); 1008 + 1002 1009 /* Update the descriptor. */ 1003 1010 if (first_desc) { 1004 1011 rx_desc_set_sof(s->rx_desc[q]); ··· 1238 1245 /* read next descriptor */ 1239 1246 if (tx_desc_get_wrap(desc)) { 1240 1247 tx_desc_set_last(desc); 1241 - packet_desc_addr = s->regs[GEM_TXQBASE]; 1248 + 1249 + if (s->regs[GEM_DMACFG] & GEM_DMACFG_ADDR_64B) { 1250 + packet_desc_addr = s->regs[GEM_TBQPH]; 1251 + packet_desc_addr <<= 32; 1252 + } else { 1253 + packet_desc_addr = 0; 1254 + } 1255 + packet_desc_addr |= s->regs[GEM_TXQBASE]; 1242 1256 } else { 1243 1257 packet_desc_addr += 4 * gem_get_desc_len(s, false); 1244 1258 }
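The wraparound fix rebuilds the full 64-bit descriptor pointer from GEM_TBQPH (upper word, only when 64-bit DMA addressing is enabled in GEM_DMACFG) and GEM_TXQBASE (lower word), where the old code reloaded only the lower 32 bits. The composition in isolation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* rebuild a 64-bit queue base from two 32-bit registers; when the DMA is
 * configured for 32-bit addressing the upper half must stay zero */
static uint64_t gem_wrap_addr(uint32_t tbqph, uint32_t txqbase, bool addr_64b)
{
    uint64_t addr = addr_64b ? (uint64_t)tbqph << 32 : 0;
    return addr | txqbase;
}
```

Before the fix, a descriptor queue based above 4 GiB would wrap back to the low 32-bit alias of its base address instead of the real one.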
hw/net/msf2-emac.c (+589)
··· 1 + /* 2 + * QEMU model of the Smartfusion2 Ethernet MAC. 3 + * 4 + * Copyright (c) 2020 Subbaraya Sundeep <sundeep.lkml@gmail.com>. 5 + * 6 + * Permission is hereby granted, free of charge, to any person obtaining a copy 7 + * of this software and associated documentation files (the "Software"), to deal 8 + * in the Software without restriction, including without limitation the rights 9 + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 10 + * copies of the Software, and to permit persons to whom the Software is 11 + * furnished to do so, subject to the following conditions: 12 + * 13 + * The above copyright notice and this permission notice shall be included in 14 + * all copies or substantial portions of the Software. 15 + * 16 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 17 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 18 + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 19 + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 20 + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 21 + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN 22 + * THE SOFTWARE. 
23 + * 24 + * Refer to section Ethernet MAC in the document: 25 + * UG0331: SmartFusion2 Microcontroller Subsystem User Guide 26 + * Datasheet URL: 27 + * https://www.microsemi.com/document-portal/cat_view/56661-internal-documents/ 28 + * 56758-soc?lang=en&limit=20&limitstart=220 29 + */ 30 + 31 + #include "qemu/osdep.h" 32 + #include "qemu-common.h" 33 + #include "qemu/log.h" 34 + #include "qapi/error.h" 35 + #include "exec/address-spaces.h" 36 + #include "hw/registerfields.h" 37 + #include "hw/net/msf2-emac.h" 38 + #include "hw/net/mii.h" 39 + #include "hw/irq.h" 40 + #include "hw/qdev-properties.h" 41 + #include "migration/vmstate.h" 42 + 43 + REG32(CFG1, 0x0) 44 + FIELD(CFG1, RESET, 31, 1) 45 + FIELD(CFG1, RX_EN, 2, 1) 46 + FIELD(CFG1, TX_EN, 0, 1) 47 + FIELD(CFG1, LB_EN, 8, 1) 48 + REG32(CFG2, 0x4) 49 + REG32(IFG, 0x8) 50 + REG32(HALF_DUPLEX, 0xc) 51 + REG32(MAX_FRAME_LENGTH, 0x10) 52 + REG32(MII_CMD, 0x24) 53 + FIELD(MII_CMD, READ, 0, 1) 54 + REG32(MII_ADDR, 0x28) 55 + FIELD(MII_ADDR, REGADDR, 0, 5) 56 + FIELD(MII_ADDR, PHYADDR, 8, 5) 57 + REG32(MII_CTL, 0x2c) 58 + REG32(MII_STS, 0x30) 59 + REG32(STA1, 0x40) 60 + REG32(STA2, 0x44) 61 + REG32(FIFO_CFG0, 0x48) 62 + REG32(FIFO_CFG4, 0x58) 63 + FIELD(FIFO_CFG4, BCAST, 9, 1) 64 + FIELD(FIFO_CFG4, MCAST, 8, 1) 65 + REG32(FIFO_CFG5, 0x5C) 66 + FIELD(FIFO_CFG5, BCAST, 9, 1) 67 + FIELD(FIFO_CFG5, MCAST, 8, 1) 68 + REG32(DMA_TX_CTL, 0x180) 69 + FIELD(DMA_TX_CTL, EN, 0, 1) 70 + REG32(DMA_TX_DESC, 0x184) 71 + REG32(DMA_TX_STATUS, 0x188) 72 + FIELD(DMA_TX_STATUS, PKTCNT, 16, 8) 73 + FIELD(DMA_TX_STATUS, UNDERRUN, 1, 1) 74 + FIELD(DMA_TX_STATUS, PKT_SENT, 0, 1) 75 + REG32(DMA_RX_CTL, 0x18c) 76 + FIELD(DMA_RX_CTL, EN, 0, 1) 77 + REG32(DMA_RX_DESC, 0x190) 78 + REG32(DMA_RX_STATUS, 0x194) 79 + FIELD(DMA_RX_STATUS, PKTCNT, 16, 8) 80 + FIELD(DMA_RX_STATUS, OVERFLOW, 2, 1) 81 + FIELD(DMA_RX_STATUS, PKT_RCVD, 0, 1) 82 + REG32(DMA_IRQ_MASK, 0x198) 83 + REG32(DMA_IRQ, 0x19c) 84 + 85 + #define EMPTY_MASK (1 << 31) 86 + #define 
PKT_SIZE 0x7FF 87 + #define PHYADDR 0x1 88 + #define MAX_PKT_SIZE 2048 89 + 90 + typedef struct { 91 + uint32_t pktaddr; 92 + uint32_t pktsize; 93 + uint32_t next; 94 + } EmacDesc; 95 + 96 + static uint32_t emac_get_isr(MSF2EmacState *s) 97 + { 98 + uint32_t ier = s->regs[R_DMA_IRQ_MASK]; 99 + uint32_t tx = s->regs[R_DMA_TX_STATUS] & 0xF; 100 + uint32_t rx = s->regs[R_DMA_RX_STATUS] & 0xF; 101 + uint32_t isr = (rx << 4) | tx; 102 + 103 + s->regs[R_DMA_IRQ] = ier & isr; 104 + return s->regs[R_DMA_IRQ]; 105 + } 106 + 107 + static void emac_update_irq(MSF2EmacState *s) 108 + { 109 + bool intr = emac_get_isr(s); 110 + 111 + qemu_set_irq(s->irq, intr); 112 + } 113 + 114 + static void emac_load_desc(MSF2EmacState *s, EmacDesc *d, hwaddr desc) 115 + { 116 + address_space_read(&s->dma_as, desc, MEMTXATTRS_UNSPECIFIED, d, sizeof *d); 117 + /* Convert from LE into host endianness. */ 118 + d->pktaddr = le32_to_cpu(d->pktaddr); 119 + d->pktsize = le32_to_cpu(d->pktsize); 120 + d->next = le32_to_cpu(d->next); 121 + } 122 + 123 + static void emac_store_desc(MSF2EmacState *s, EmacDesc *d, hwaddr desc) 124 + { 125 + /* Convert from host endianness into LE. 
*/ 126 + d->pktaddr = cpu_to_le32(d->pktaddr); 127 + d->pktsize = cpu_to_le32(d->pktsize); 128 + d->next = cpu_to_le32(d->next); 129 + 130 + address_space_write(&s->dma_as, desc, MEMTXATTRS_UNSPECIFIED, d, sizeof *d); 131 + } 132 + 133 + static void msf2_dma_tx(MSF2EmacState *s) 134 + { 135 + NetClientState *nc = qemu_get_queue(s->nic); 136 + hwaddr desc = s->regs[R_DMA_TX_DESC]; 137 + uint8_t buf[MAX_PKT_SIZE]; 138 + EmacDesc d; 139 + int size; 140 + uint8_t pktcnt; 141 + uint32_t status; 142 + 143 + if (!(s->regs[R_CFG1] & R_CFG1_TX_EN_MASK)) { 144 + return; 145 + } 146 + 147 + while (1) { 148 + emac_load_desc(s, &d, desc); 149 + if (d.pktsize & EMPTY_MASK) { 150 + break; 151 + } 152 + size = d.pktsize & PKT_SIZE; 153 + address_space_read(&s->dma_as, d.pktaddr, MEMTXATTRS_UNSPECIFIED, 154 + buf, size); 155 + /* 156 + * This is very basic way to send packets. Ideally there should be 157 + * a FIFO and packets should be sent out from FIFO only when 158 + * R_CFG1 bit 0 is set. 159 + */ 160 + if (s->regs[R_CFG1] & R_CFG1_LB_EN_MASK) { 161 + nc->info->receive(nc, buf, size); 162 + } else { 163 + qemu_send_packet(nc, buf, size); 164 + } 165 + d.pktsize |= EMPTY_MASK; 166 + emac_store_desc(s, &d, desc); 167 + /* update sent packets count */ 168 + status = s->regs[R_DMA_TX_STATUS]; 169 + pktcnt = FIELD_EX32(status, DMA_TX_STATUS, PKTCNT); 170 + pktcnt++; 171 + s->regs[R_DMA_TX_STATUS] = FIELD_DP32(status, DMA_TX_STATUS, 172 + PKTCNT, pktcnt); 173 + s->regs[R_DMA_TX_STATUS] |= R_DMA_TX_STATUS_PKT_SENT_MASK; 174 + desc = d.next; 175 + } 176 + s->regs[R_DMA_TX_STATUS] |= R_DMA_TX_STATUS_UNDERRUN_MASK; 177 + s->regs[R_DMA_TX_CTL] &= ~R_DMA_TX_CTL_EN_MASK; 178 + } 179 + 180 + static void msf2_phy_update_link(MSF2EmacState *s) 181 + { 182 + /* Autonegotiation status mirrors link status. 
*/ 183 + if (qemu_get_queue(s->nic)->link_down) { 184 + s->phy_regs[MII_BMSR] &= ~(MII_BMSR_AN_COMP | 185 + MII_BMSR_LINK_ST); 186 + } else { 187 + s->phy_regs[MII_BMSR] |= (MII_BMSR_AN_COMP | 188 + MII_BMSR_LINK_ST); 189 + } 190 + } 191 + 192 + static void msf2_phy_reset(MSF2EmacState *s) 193 + { 194 + memset(&s->phy_regs[0], 0, sizeof(s->phy_regs)); 195 + s->phy_regs[MII_BMCR] = 0x1140; 196 + s->phy_regs[MII_BMSR] = 0x7968; 197 + s->phy_regs[MII_PHYID1] = 0x0022; 198 + s->phy_regs[MII_PHYID2] = 0x1550; 199 + s->phy_regs[MII_ANAR] = 0x01E1; 200 + s->phy_regs[MII_ANLPAR] = 0xCDE1; 201 + 202 + msf2_phy_update_link(s); 203 + } 204 + 205 + static void write_to_phy(MSF2EmacState *s) 206 + { 207 + uint8_t reg_addr = s->regs[R_MII_ADDR] & R_MII_ADDR_REGADDR_MASK; 208 + uint8_t phy_addr = (s->regs[R_MII_ADDR] >> R_MII_ADDR_PHYADDR_SHIFT) & 209 + R_MII_ADDR_REGADDR_MASK; 210 + uint16_t data = s->regs[R_MII_CTL] & 0xFFFF; 211 + 212 + if (phy_addr != PHYADDR) { 213 + return; 214 + } 215 + 216 + switch (reg_addr) { 217 + case MII_BMCR: 218 + if (data & MII_BMCR_RESET) { 219 + /* Phy reset */ 220 + msf2_phy_reset(s); 221 + data &= ~MII_BMCR_RESET; 222 + } 223 + if (data & MII_BMCR_AUTOEN) { 224 + /* Complete autonegotiation immediately */ 225 + data &= ~MII_BMCR_AUTOEN; 226 + s->phy_regs[MII_BMSR] |= MII_BMSR_AN_COMP; 227 + } 228 + break; 229 + } 230 + 231 + s->phy_regs[reg_addr] = data; 232 + } 233 + 234 + static uint16_t read_from_phy(MSF2EmacState *s) 235 + { 236 + uint8_t reg_addr = s->regs[R_MII_ADDR] & R_MII_ADDR_REGADDR_MASK; 237 + uint8_t phy_addr = (s->regs[R_MII_ADDR] >> R_MII_ADDR_PHYADDR_SHIFT) & 238 + R_MII_ADDR_REGADDR_MASK; 239 + 240 + if (phy_addr == PHYADDR) { 241 + return s->phy_regs[reg_addr]; 242 + } else { 243 + return 0xFFFF; 244 + } 245 + } 246 + 247 + static void msf2_emac_do_reset(MSF2EmacState *s) 248 + { 249 + memset(&s->regs[0], 0, sizeof(s->regs)); 250 + s->regs[R_CFG1] = 0x80000000; 251 + s->regs[R_CFG2] = 0x00007000; 252 + s->regs[R_IFG] = 
0x40605060; 253 + s->regs[R_HALF_DUPLEX] = 0x00A1F037; 254 + s->regs[R_MAX_FRAME_LENGTH] = 0x00000600; 255 + s->regs[R_FIFO_CFG5] = 0X3FFFF; 256 + 257 + msf2_phy_reset(s); 258 + } 259 + 260 + static uint64_t emac_read(void *opaque, hwaddr addr, unsigned int size) 261 + { 262 + MSF2EmacState *s = opaque; 263 + uint32_t r = 0; 264 + 265 + addr >>= 2; 266 + 267 + switch (addr) { 268 + case R_DMA_IRQ: 269 + r = emac_get_isr(s); 270 + break; 271 + default: 272 + if (addr >= ARRAY_SIZE(s->regs)) { 273 + qemu_log_mask(LOG_GUEST_ERROR, 274 + "%s: Bad offset 0x%" HWADDR_PRIx "\n", __func__, 275 + addr * 4); 276 + return r; 277 + } 278 + r = s->regs[addr]; 279 + break; 280 + } 281 + return r; 282 + } 283 + 284 + static void emac_write(void *opaque, hwaddr addr, uint64_t val64, 285 + unsigned int size) 286 + { 287 + MSF2EmacState *s = opaque; 288 + uint32_t value = val64; 289 + uint32_t enreqbits; 290 + uint8_t pktcnt; 291 + 292 + addr >>= 2; 293 + switch (addr) { 294 + case R_DMA_TX_CTL: 295 + s->regs[addr] = value; 296 + if (value & R_DMA_TX_CTL_EN_MASK) { 297 + msf2_dma_tx(s); 298 + } 299 + break; 300 + case R_DMA_RX_CTL: 301 + s->regs[addr] = value; 302 + if (value & R_DMA_RX_CTL_EN_MASK) { 303 + s->rx_desc = s->regs[R_DMA_RX_DESC]; 304 + qemu_flush_queued_packets(qemu_get_queue(s->nic)); 305 + } 306 + break; 307 + case R_CFG1: 308 + s->regs[addr] = value; 309 + if (value & R_CFG1_RESET_MASK) { 310 + msf2_emac_do_reset(s); 311 + } 312 + break; 313 + case R_FIFO_CFG0: 314 + /* 315 + * For our implementation, turning on modules is instantaneous, 316 + * so the states requested via the *ENREQ bits appear in the 317 + * *ENRPLY bits immediately. Also the reset bits to reset PE-MCXMAC 318 + * module are not emulated here since it deals with start of frames, 319 + * inter-packet gap and control frames. 
320 + */ 321 + enreqbits = extract32(value, 8, 5); 322 + s->regs[addr] = deposit32(value, 16, 5, enreqbits); 323 + break; 324 + case R_DMA_TX_DESC: 325 + if (value & 0x3) { 326 + qemu_log_mask(LOG_GUEST_ERROR, "Tx Descriptor address should be" 327 + " 32 bit aligned\n"); 328 + } 329 + /* Ignore [1:0] bits */ 330 + s->regs[addr] = value & ~3; 331 + break; 332 + case R_DMA_RX_DESC: 333 + if (value & 0x3) { 334 + qemu_log_mask(LOG_GUEST_ERROR, "Rx Descriptor address should be" 335 + " 32 bit aligned\n"); 336 + } 337 + /* Ignore [1:0] bits */ 338 + s->regs[addr] = value & ~3; 339 + break; 340 + case R_DMA_TX_STATUS: 341 + if (value & R_DMA_TX_STATUS_UNDERRUN_MASK) { 342 + s->regs[addr] &= ~R_DMA_TX_STATUS_UNDERRUN_MASK; 343 + } 344 + if (value & R_DMA_TX_STATUS_PKT_SENT_MASK) { 345 + pktcnt = FIELD_EX32(s->regs[addr], DMA_TX_STATUS, PKTCNT); 346 + pktcnt--; 347 + s->regs[addr] = FIELD_DP32(s->regs[addr], DMA_TX_STATUS, 348 + PKTCNT, pktcnt); 349 + if (pktcnt == 0) { 350 + s->regs[addr] &= ~R_DMA_TX_STATUS_PKT_SENT_MASK; 351 + } 352 + } 353 + break; 354 + case R_DMA_RX_STATUS: 355 + if (value & R_DMA_RX_STATUS_OVERFLOW_MASK) { 356 + s->regs[addr] &= ~R_DMA_RX_STATUS_OVERFLOW_MASK; 357 + } 358 + if (value & R_DMA_RX_STATUS_PKT_RCVD_MASK) { 359 + pktcnt = FIELD_EX32(s->regs[addr], DMA_RX_STATUS, PKTCNT); 360 + pktcnt--; 361 + s->regs[addr] = FIELD_DP32(s->regs[addr], DMA_RX_STATUS, 362 + PKTCNT, pktcnt); 363 + if (pktcnt == 0) { 364 + s->regs[addr] &= ~R_DMA_RX_STATUS_PKT_RCVD_MASK; 365 + } 366 + } 367 + break; 368 + case R_DMA_IRQ: 369 + break; 370 + case R_MII_CMD: 371 + if (value & R_MII_CMD_READ_MASK) { 372 + s->regs[R_MII_STS] = read_from_phy(s); 373 + } 374 + break; 375 + case R_MII_CTL: 376 + s->regs[addr] = value; 377 + write_to_phy(s); 378 + break; 379 + case R_STA1: 380 + s->regs[addr] = value; 381 + /* 382 + * R_STA1 [31:24] : octet 1 of mac address 383 + * R_STA1 [23:16] : octet 2 of mac address 384 + * R_STA1 [15:8] : octet 3 of mac address 385 + * R_STA1 
  [7:0]  : octet 4 of mac address
386 +      */
387 +         stl_be_p(s->mac_addr, value);
388 +         break;
389 +     case R_STA2:
390 +         s->regs[addr] = value;
391 +         /*
392 +          * R_STA2 [31:24] : octet 5 of mac address
393 +          * R_STA2 [23:16] : octet 6 of mac address
394 +          */
395 +         stw_be_p(s->mac_addr + 4, value >> 16);
396 +         break;
397 +     default:
398 +         if (addr >= ARRAY_SIZE(s->regs)) {
399 +             qemu_log_mask(LOG_GUEST_ERROR,
400 +                           "%s: Bad offset 0x%" HWADDR_PRIx "\n", __func__,
401 +                           addr * 4);
402 +             return;
403 +         }
404 +         s->regs[addr] = value;
405 +         break;
406 +     }
407 +     emac_update_irq(s);
408 + }
409 + 
410 + static const MemoryRegionOps emac_ops = {
411 +     .read = emac_read,
412 +     .write = emac_write,
413 +     .endianness = DEVICE_NATIVE_ENDIAN,
414 +     .impl = {
415 +         .min_access_size = 4,
416 +         .max_access_size = 4
417 +     }
418 + };
419 + 
420 + static bool emac_can_rx(NetClientState *nc)
421 + {
422 +     MSF2EmacState *s = qemu_get_nic_opaque(nc);
423 + 
424 +     return (s->regs[R_CFG1] & R_CFG1_RX_EN_MASK) &&
425 +            (s->regs[R_DMA_RX_CTL] & R_DMA_RX_CTL_EN_MASK);
426 + }
427 + 
428 + static bool addr_filter_ok(MSF2EmacState *s, const uint8_t *buf)
429 + {
430 +     /* The broadcast MAC address: FF:FF:FF:FF:FF:FF */
431 +     const uint8_t broadcast_addr[] = { 0xFF, 0xFF, 0xFF, 0xFF,
432 +                                        0xFF, 0xFF };
433 +     bool bcast_en = true;
434 +     bool mcast_en = true;
435 + 
436 +     if (s->regs[R_FIFO_CFG5] & R_FIFO_CFG5_BCAST_MASK) {
437 +         bcast_en = true; /* Broadcast don't care for drop circuitry */
438 +     } else if (s->regs[R_FIFO_CFG4] & R_FIFO_CFG4_BCAST_MASK) {
439 +         bcast_en = false;
440 +     }
441 + 
442 +     if (s->regs[R_FIFO_CFG5] & R_FIFO_CFG5_MCAST_MASK) {
443 +         mcast_en = true; /* Multicast don't care for drop circuitry */
444 +     } else if (s->regs[R_FIFO_CFG4] & R_FIFO_CFG4_MCAST_MASK) {
445 +         mcast_en = false;
446 +     }
447 + 
448 +     if (!memcmp(buf, broadcast_addr, sizeof(broadcast_addr))) {
449 +         return bcast_en;
450 +     }
451 + 
452 +     if (buf[0] & 1) {
453 +         return mcast_en;
454 +     }
455 + 
456 +     return !memcmp(buf,
s->mac_addr, sizeof(s->mac_addr)); 457 + } 458 + 459 + static ssize_t emac_rx(NetClientState *nc, const uint8_t *buf, size_t size) 460 + { 461 + MSF2EmacState *s = qemu_get_nic_opaque(nc); 462 + EmacDesc d; 463 + uint8_t pktcnt; 464 + uint32_t status; 465 + 466 + if (size > (s->regs[R_MAX_FRAME_LENGTH] & 0xFFFF)) { 467 + return size; 468 + } 469 + if (!addr_filter_ok(s, buf)) { 470 + return size; 471 + } 472 + 473 + emac_load_desc(s, &d, s->rx_desc); 474 + 475 + if (d.pktsize & EMPTY_MASK) { 476 + address_space_write(&s->dma_as, d.pktaddr, MEMTXATTRS_UNSPECIFIED, 477 + buf, size & PKT_SIZE); 478 + d.pktsize = size & PKT_SIZE; 479 + emac_store_desc(s, &d, s->rx_desc); 480 + /* update received packets count */ 481 + status = s->regs[R_DMA_RX_STATUS]; 482 + pktcnt = FIELD_EX32(status, DMA_RX_STATUS, PKTCNT); 483 + pktcnt++; 484 + s->regs[R_DMA_RX_STATUS] = FIELD_DP32(status, DMA_RX_STATUS, 485 + PKTCNT, pktcnt); 486 + s->regs[R_DMA_RX_STATUS] |= R_DMA_RX_STATUS_PKT_RCVD_MASK; 487 + s->rx_desc = d.next; 488 + } else { 489 + s->regs[R_DMA_RX_CTL] &= ~R_DMA_RX_CTL_EN_MASK; 490 + s->regs[R_DMA_RX_STATUS] |= R_DMA_RX_STATUS_OVERFLOW_MASK; 491 + } 492 + emac_update_irq(s); 493 + return size; 494 + } 495 + 496 + static void msf2_emac_reset(DeviceState *dev) 497 + { 498 + MSF2EmacState *s = MSS_EMAC(dev); 499 + 500 + msf2_emac_do_reset(s); 501 + } 502 + 503 + static void emac_set_link(NetClientState *nc) 504 + { 505 + MSF2EmacState *s = qemu_get_nic_opaque(nc); 506 + 507 + msf2_phy_update_link(s); 508 + } 509 + 510 + static NetClientInfo net_msf2_emac_info = { 511 + .type = NET_CLIENT_DRIVER_NIC, 512 + .size = sizeof(NICState), 513 + .can_receive = emac_can_rx, 514 + .receive = emac_rx, 515 + .link_status_changed = emac_set_link, 516 + }; 517 + 518 + static void msf2_emac_realize(DeviceState *dev, Error **errp) 519 + { 520 + MSF2EmacState *s = MSS_EMAC(dev); 521 + 522 + if (!s->dma_mr) { 523 + error_setg(errp, "MSS_EMAC 'ahb-bus' link not set"); 524 + return; 525 + } 526 + 
527 + address_space_init(&s->dma_as, s->dma_mr, "emac-ahb"); 528 + 529 + qemu_macaddr_default_if_unset(&s->conf.macaddr); 530 + s->nic = qemu_new_nic(&net_msf2_emac_info, &s->conf, 531 + object_get_typename(OBJECT(dev)), dev->id, s); 532 + qemu_format_nic_info_str(qemu_get_queue(s->nic), s->conf.macaddr.a); 533 + } 534 + 535 + static void msf2_emac_init(Object *obj) 536 + { 537 + MSF2EmacState *s = MSS_EMAC(obj); 538 + 539 + sysbus_init_irq(SYS_BUS_DEVICE(obj), &s->irq); 540 + 541 + memory_region_init_io(&s->mmio, obj, &emac_ops, s, 542 + "msf2-emac", R_MAX * 4); 543 + sysbus_init_mmio(SYS_BUS_DEVICE(obj), &s->mmio); 544 + } 545 + 546 + static Property msf2_emac_properties[] = { 547 + DEFINE_PROP_LINK("ahb-bus", MSF2EmacState, dma_mr, 548 + TYPE_MEMORY_REGION, MemoryRegion *), 549 + DEFINE_NIC_PROPERTIES(MSF2EmacState, conf), 550 + DEFINE_PROP_END_OF_LIST(), 551 + }; 552 + 553 + static const VMStateDescription vmstate_msf2_emac = { 554 + .name = TYPE_MSS_EMAC, 555 + .version_id = 1, 556 + .minimum_version_id = 1, 557 + .fields = (VMStateField[]) { 558 + VMSTATE_UINT8_ARRAY(mac_addr, MSF2EmacState, ETH_ALEN), 559 + VMSTATE_UINT32(rx_desc, MSF2EmacState), 560 + VMSTATE_UINT16_ARRAY(phy_regs, MSF2EmacState, PHY_MAX_REGS), 561 + VMSTATE_UINT32_ARRAY(regs, MSF2EmacState, R_MAX), 562 + VMSTATE_END_OF_LIST() 563 + } 564 + }; 565 + 566 + static void msf2_emac_class_init(ObjectClass *klass, void *data) 567 + { 568 + DeviceClass *dc = DEVICE_CLASS(klass); 569 + 570 + dc->realize = msf2_emac_realize; 571 + dc->reset = msf2_emac_reset; 572 + dc->vmsd = &vmstate_msf2_emac; 573 + device_class_set_props(dc, msf2_emac_properties); 574 + } 575 + 576 + static const TypeInfo msf2_emac_info = { 577 + .name = TYPE_MSS_EMAC, 578 + .parent = TYPE_SYS_BUS_DEVICE, 579 + .instance_size = sizeof(MSF2EmacState), 580 + .instance_init = msf2_emac_init, 581 + .class_init = msf2_emac_class_init, 582 + }; 583 + 584 + static void msf2_emac_register_types(void) 585 + { 586 + 
type_register_static(&msf2_emac_info); 587 + } 588 + 589 + type_init(msf2_emac_register_types)
+2
include/hw/arm/msf2-soc.h
··· 29 29 #include "hw/timer/mss-timer.h" 30 30 #include "hw/misc/msf2-sysreg.h" 31 31 #include "hw/ssi/mss-spi.h" 32 + #include "hw/net/msf2-emac.h" 32 33 33 34 #define TYPE_MSF2_SOC "msf2-soc" 34 35 #define MSF2_SOC(obj) OBJECT_CHECK(MSF2State, (obj), TYPE_MSF2_SOC) ··· 62 63 MSF2SysregState sysreg; 63 64 MSSTimerState timer; 64 65 MSSSpiState spi[MSF2_NUM_SPIS]; 66 + MSF2EmacState emac; 65 67 } MSF2State; 66 68 67 69 #endif
+1
include/hw/char/cadence_uart.h
··· 49 49 CharBackend chr; 50 50 qemu_irq irq; 51 51 QEMUTimer *fifo_trigger_handle; 52 + Clock *refclk; 52 53 } CadenceUARTState; 53 54 54 55 static inline DeviceState *cadence_uart_create(hwaddr addr,
+225
include/hw/clock.h
···
 1 + /*
 2 +  * Hardware Clocks
 3 +  *
 4 +  * Copyright GreenSocs 2016-2020
 5 +  *
 6 +  * Authors:
 7 +  *  Frederic Konrad
 8 +  *  Damien Hedde
 9 +  *
10 +  * This work is licensed under the terms of the GNU GPL, version 2 or later.
11 +  * See the COPYING file in the top-level directory.
12 +  */
13 + 
14 + #ifndef QEMU_HW_CLOCK_H
15 + #define QEMU_HW_CLOCK_H
16 + 
17 + #include "qom/object.h"
18 + #include "qemu/queue.h"
19 + 
20 + #define TYPE_CLOCK "clock"
21 + #define CLOCK(obj) OBJECT_CHECK(Clock, (obj), TYPE_CLOCK)
22 + 
23 + typedef void ClockCallback(void *opaque);
24 + 
25 + /*
26 +  * A clock stores a value representing the clock's period in 2^-32 ns units.
27 +  * It can represent:
28 +  *  + periods from 2^-32 ns up to 4 seconds
29 +  *  + frequencies from ~0.25Hz up to 2e10Ghz
30 +  * The resolution of the frequency representation decreases with frequency:
31 +  *  + at 100MHz, resolution is ~2mHz
32 +  *  + at 1Ghz, resolution is ~0.2Hz
33 +  *  + at 10Ghz, resolution is ~20Hz
34 +  */
35 + #define CLOCK_PERIOD_1SEC (1000000000llu << 32)
36 + 
37 + /*
38 +  * macro helpers to convert to hertz / nanosecond
39 +  */
40 + #define CLOCK_PERIOD_FROM_NS(ns) ((ns) * (CLOCK_PERIOD_1SEC / 1000000000llu))
41 + #define CLOCK_PERIOD_TO_NS(per) ((per) / (CLOCK_PERIOD_1SEC / 1000000000llu))
42 + #define CLOCK_PERIOD_FROM_HZ(hz) (((hz) != 0) ? CLOCK_PERIOD_1SEC / (hz) : 0u)
43 + #define CLOCK_PERIOD_TO_HZ(per) (((per) != 0) ?
CLOCK_PERIOD_1SEC / (per) : 0u) 44 + 45 + /** 46 + * Clock: 47 + * @parent_obj: parent class 48 + * @period: unsigned integer representing the period of the clock 49 + * @canonical_path: clock path string cache (used for trace purpose) 50 + * @callback: called when clock changes 51 + * @callback_opaque: argument for @callback 52 + * @source: source (or parent in clock tree) of the clock 53 + * @children: list of clocks connected to this one (it is their source) 54 + * @sibling: structure used to form a clock list 55 + */ 56 + 57 + typedef struct Clock Clock; 58 + 59 + struct Clock { 60 + /*< private >*/ 61 + Object parent_obj; 62 + 63 + /* all fields are private and should not be modified directly */ 64 + 65 + /* fields */ 66 + uint64_t period; 67 + char *canonical_path; 68 + ClockCallback *callback; 69 + void *callback_opaque; 70 + 71 + /* Clocks are organized in a clock tree */ 72 + Clock *source; 73 + QLIST_HEAD(, Clock) children; 74 + QLIST_ENTRY(Clock) sibling; 75 + }; 76 + 77 + /* 78 + * vmstate description entry to be added in device vmsd. 79 + */ 80 + extern const VMStateDescription vmstate_clock; 81 + #define VMSTATE_CLOCK(field, state) \ 82 + VMSTATE_CLOCK_V(field, state, 0) 83 + #define VMSTATE_CLOCK_V(field, state, version) \ 84 + VMSTATE_STRUCT_POINTER_V(field, state, version, vmstate_clock, Clock) 85 + 86 + /** 87 + * clock_setup_canonical_path: 88 + * @clk: clock 89 + * 90 + * compute the canonical path of the clock (used by log messages) 91 + */ 92 + void clock_setup_canonical_path(Clock *clk); 93 + 94 + /** 95 + * clock_set_callback: 96 + * @clk: the clock to register the callback into 97 + * @cb: the callback function 98 + * @opaque: the argument to the callback 99 + * 100 + * Register a callback called on every clock update. 
101 +  */
102 + void clock_set_callback(Clock *clk, ClockCallback *cb, void *opaque);
103 + 
104 + /**
105 +  * clock_clear_callback:
106 +  * @clk: the clock to delete the callback from
107 +  *
108 +  * Unregister the callback registered with clock_set_callback.
109 +  */
110 + void clock_clear_callback(Clock *clk);
111 + 
112 + /**
113 +  * clock_set_source:
114 +  * @clk: the clock.
115 +  * @src: the source clock
116 +  *
117 +  * Set up @src as the clock source of @clk. The current @src period
118 +  * value is also copied to @clk and its subtree but no callback is
119 +  * called.
120 +  * Further @src updates will be propagated to @clk and its subtree.
121 +  */
122 + void clock_set_source(Clock *clk, Clock *src);
123 + 
124 + /**
125 +  * clock_set:
126 +  * @clk: the clock to initialize.
127 +  * @value: the clock's value, 0 means unclocked
128 +  *
129 +  * Set the local cached period value of @clk to @value.
130 +  */
131 + void clock_set(Clock *clk, uint64_t value);
132 + 
133 + static inline void clock_set_hz(Clock *clk, unsigned hz)
134 + {
135 +     clock_set(clk, CLOCK_PERIOD_FROM_HZ(hz));
136 + }
137 + 
138 + static inline void clock_set_ns(Clock *clk, unsigned ns)
139 + {
140 +     clock_set(clk, CLOCK_PERIOD_FROM_NS(ns));
141 + }
142 + 
143 + /**
144 +  * clock_propagate:
145 +  * @clk: the clock
146 +  *
147 +  * Propagate the clock period that has been previously configured using
148 +  * @clock_set(). This will recursively update all connected clocks.
149 +  * It is an error to call this function on a clock which has a source.
150 +  * Note: this function must not be called during device initialization
151 +  * or migration.
152 +  */
153 + void clock_propagate(Clock *clk);
154 + 
155 + /**
156 +  * clock_update:
157 +  * @clk: the clock to update.
158 +  * @value: the new clock's value, 0 means unclocked
159 +  *
160 +  * Update the @clk to the new @value. All connected clocks will be informed
161 +  * of this update. This is equivalent to calling @clock_set() then
162 +  * @clock_propagate().
163 +  */
164 + static inline void clock_update(Clock *clk, uint64_t value)
165 + {
166 +     clock_set(clk, value);
167 +     clock_propagate(clk);
168 + }
169 + 
170 + static inline void clock_update_hz(Clock *clk, unsigned hz)
171 + {
172 +     clock_update(clk, CLOCK_PERIOD_FROM_HZ(hz));
173 + }
174 + 
175 + static inline void clock_update_ns(Clock *clk, unsigned ns)
176 + {
177 +     clock_update(clk, CLOCK_PERIOD_FROM_NS(ns));
178 + }
179 + 
180 + /**
181 +  * clock_get:
182 +  * @clk: the clock whose period to fetch
183 +  *
184 +  * @return: the current period.
185 +  */
186 + static inline uint64_t clock_get(const Clock *clk)
187 + {
188 +     return clk->period;
189 + }
190 + 
191 + static inline unsigned clock_get_hz(Clock *clk)
192 + {
193 +     return CLOCK_PERIOD_TO_HZ(clock_get(clk));
194 + }
195 + 
196 + static inline unsigned clock_get_ns(Clock *clk)
197 + {
198 +     return CLOCK_PERIOD_TO_NS(clock_get(clk));
199 + }
200 + 
201 + /**
202 +  * clock_is_enabled:
203 +  * @clk: a clock
204 +  *
205 +  * @return: true if the clock is running.
206 +  */
207 + static inline bool clock_is_enabled(const Clock *clk)
208 + {
209 +     return clock_get(clk) != 0;
210 + }
211 + 
212 + static inline void clock_init(Clock *clk, uint64_t value)
213 + {
214 +     clock_set(clk, value);
215 + }
216 + static inline void clock_init_hz(Clock *clk, uint64_t value)
217 + {
218 +     clock_set_hz(clk, value);
219 + }
220 + static inline void clock_init_ns(Clock *clk, uint64_t value)
221 + {
222 +     clock_set_ns(clk, value);
223 + }
224 + 
225 + #endif /* QEMU_HW_CLOCK_H */
+1 -1
include/hw/gpio/nrf51_gpio.h
··· 42 42 #define NRF51_GPIO_REG_DIRSET 0x518 43 43 #define NRF51_GPIO_REG_DIRCLR 0x51C 44 44 #define NRF51_GPIO_REG_CNF_START 0x700 45 - #define NRF51_GPIO_REG_CNF_END 0x77F 45 + #define NRF51_GPIO_REG_CNF_END 0x77C 46 46 47 47 #define NRF51_GPIO_PULLDOWN 1 48 48 #define NRF51_GPIO_PULLUP 3
+53
include/hw/net/msf2-emac.h
··· 1 + /* 2 + * QEMU model of the Smartfusion2 Ethernet MAC. 3 + * 4 + * Copyright (c) 2020 Subbaraya Sundeep <sundeep.lkml@gmail.com>. 5 + * 6 + * Permission is hereby granted, free of charge, to any person obtaining a copy 7 + * of this software and associated documentation files (the "Software"), to deal 8 + * in the Software without restriction, including without limitation the rights 9 + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 10 + * copies of the Software, and to permit persons to whom the Software is 11 + * furnished to do so, subject to the following conditions: 12 + * 13 + * The above copyright notice and this permission notice shall be included in 14 + * all copies or substantial portions of the Software. 15 + * 16 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 17 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 18 + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 19 + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 20 + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 21 + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN 22 + * THE SOFTWARE. 23 + */ 24 + 25 + #include "hw/sysbus.h" 26 + #include "exec/memory.h" 27 + #include "net/net.h" 28 + #include "net/eth.h" 29 + 30 + #define TYPE_MSS_EMAC "msf2-emac" 31 + #define MSS_EMAC(obj) \ 32 + OBJECT_CHECK(MSF2EmacState, (obj), TYPE_MSS_EMAC) 33 + 34 + #define R_MAX (0x1a0 / 4) 35 + #define PHY_MAX_REGS 32 36 + 37 + typedef struct MSF2EmacState { 38 + SysBusDevice parent; 39 + 40 + MemoryRegion mmio; 41 + MemoryRegion *dma_mr; 42 + AddressSpace dma_as; 43 + 44 + qemu_irq irq; 45 + NICState *nic; 46 + NICConf conf; 47 + 48 + uint8_t mac_addr[ETH_ALEN]; 49 + uint32_t rx_desc; 50 + uint16_t phy_regs[PHY_MAX_REGS]; 51 + 52 + uint32_t regs[R_MAX]; 53 + } MSF2EmacState;
+159
include/hw/qdev-clock.h
···
 1 + /*
 2 +  * Device's clock input and output
 3 +  *
 4 +  * Copyright GreenSocs 2016-2020
 5 +  *
 6 +  * Authors:
 7 +  *  Frederic Konrad
 8 +  *  Damien Hedde
 9 +  *
10 +  * This work is licensed under the terms of the GNU GPL, version 2 or later.
11 +  * See the COPYING file in the top-level directory.
12 +  */
13 + 
14 + #ifndef QDEV_CLOCK_H
15 + #define QDEV_CLOCK_H
16 + 
17 + #include "hw/clock.h"
18 + 
19 + /**
20 +  * qdev_init_clock_in:
21 +  * @dev: the device to add an input clock to
22 +  * @name: the name of the clock (can't be NULL).
23 +  * @callback: optional callback to be called on update or NULL.
24 +  * @opaque: argument for the callback
25 +  * @returns: a pointer to the newly added clock
26 +  *
27 +  * Add an input clock to device @dev as a clock named @name.
28 +  * This adds a child<> property.
29 +  * The callback will be called with @opaque as opaque parameter.
30 +  */
31 + Clock *qdev_init_clock_in(DeviceState *dev, const char *name,
32 +                           ClockCallback *callback, void *opaque);
33 + 
34 + /**
35 +  * qdev_init_clock_out:
36 +  * @dev: the device to add an output clock to
37 +  * @name: the name of the clock (can't be NULL).
38 +  * @returns: a pointer to the newly added clock
39 +  *
40 +  * Add an output clock to device @dev as a clock named @name.
41 +  * This adds a child<> property.
42 +  */
43 + Clock *qdev_init_clock_out(DeviceState *dev, const char *name);
44 + 
45 + /**
46 +  * qdev_get_clock_in:
47 +  * @dev: the device which has the clock
48 +  * @name: the name of the clock (can't be NULL).
49 +  * @returns: a pointer to the clock
50 +  *
51 +  * Get the input clock @name from @dev, or NULL if it does not exist.
52 +  */
53 + Clock *qdev_get_clock_in(DeviceState *dev, const char *name);
54 + 
55 + /**
56 +  * qdev_get_clock_out:
57 +  * @dev: the device which has the clock
58 +  * @name: the name of the clock (can't be NULL).
59 +  * @returns: a pointer to the clock
60 +  *
61 +  * Get the output clock @name from @dev, or NULL if it does not exist.
 62 +  */
 63 + Clock *qdev_get_clock_out(DeviceState *dev, const char *name);
 64 + 
 65 + /**
 66 +  * qdev_connect_clock_in:
 67 +  * @dev: a device
 68 +  * @name: the name of an input clock in @dev
 69 +  * @source: the source clock (an output clock of another device for example)
 70 +  *
 71 +  * Set the source clock of input clock @name of device @dev to @source.
 72 +  * @source period updates will be propagated to @name clock.
 73 +  */
 74 + static inline void qdev_connect_clock_in(DeviceState *dev, const char *name,
 75 +                                          Clock *source)
 76 + {
 77 +     clock_set_source(qdev_get_clock_in(dev, name), source);
 78 + }
 79 + 
 80 + /**
 81 +  * qdev_alias_clock:
 82 +  * @dev: the device which has the clock
 83 +  * @name: the name of the clock in @dev (can't be NULL)
 84 +  * @alias_dev: the device to add the clock to
 85 +  * @alias_name: the name of the clock in @alias_dev
 86 +  * @returns: a pointer to the clock
 87 +  *
 88 +  * Add a clock @alias_name in @alias_dev which is an alias of the clock @name
 89 +  * in @dev. The direction, _in_ or _out_, will be the same as the original.
 90 +  * An alias clock must not be modified or used by @alias_dev and should
 91 +  * typically be used only for device composition purposes.
 92 +  */
 93 + Clock *qdev_alias_clock(DeviceState *dev, const char *name,
 94 +                         DeviceState *alias_dev, const char *alias_name);
 95 + 
 96 + /**
 97 +  * qdev_finalize_clocklist:
 98 +  * @dev: the device being finalized
 99 +  *
100 +  * Clear the clocklist from @dev. Only used internally in qdev.
101 +  */
102 + void qdev_finalize_clocklist(DeviceState *dev);
103 + 
104 + /**
105 +  * ClockPortInitElem:
106 +  * @name: name of the clock (can't be NULL)
107 +  * @is_output: indicates whether the clock is input or output
108 +  * @callback: for inputs, optional callback to be called on clock's update
109 +  * with device as opaque
110 +  * @offset: optional offset to store the ClockIn or ClockOut pointer in device
111 +  * state structure (0 means unused)
112 +  */
113 + struct ClockPortInitElem {
114 +     const char *name;
115 +     bool is_output;
116 +     ClockCallback *callback;
117 +     size_t offset;
118 + };
119 + 
120 + #define clock_offset_value(devstate, field) \
121 +     (offsetof(devstate, field) + \
122 +      type_check(Clock *, typeof_field(devstate, field)))
123 + 
124 + #define QDEV_CLOCK(out_not_in, devstate, field, cb) { \
125 +     .name = (stringify(field)), \
126 +     .is_output = out_not_in, \
127 +     .callback = cb, \
128 +     .offset = clock_offset_value(devstate, field), \
129 + }
130 + 
131 + /**
132 +  * QDEV_CLOCK_(IN|OUT):
133 +  * @devstate: structure type. @dev argument of qdev_init_clocks below must be
134 +  * a pointer to that same type.
135 +  * @field: a field in @devstate (must be Clock*)
136 +  * @callback: (for input only) callback (or NULL) to be called with the device
137 +  * state as argument
138 +  *
139 +  * The name of the clock will be derived from @field
140 +  */
141 + #define QDEV_CLOCK_IN(devstate, field, callback) \
142 +     QDEV_CLOCK(false, devstate, field, callback)
143 + 
144 + #define QDEV_CLOCK_OUT(devstate, field) \
145 +     QDEV_CLOCK(true, devstate, field, NULL)
146 + 
147 + #define QDEV_CLOCK_END { .name = NULL }
148 + 
149 + typedef struct ClockPortInitElem ClockPortInitArray[];
150 + 
151 + /**
152 +  * qdev_init_clocks:
153 +  * @dev: the device to add clocks to
154 +  * @clocks: a QDEV_CLOCK_END-terminated array which contains the
155 +  * clocks information.
156 + */ 157 + void qdev_init_clocks(DeviceState *dev, const ClockPortInitArray clocks); 158 + 159 + #endif /* QDEV_CLOCK_H */
+12
include/hw/qdev-core.h
··· 149 149 QLIST_ENTRY(NamedGPIOList) node; 150 150 }; 151 151 152 + typedef struct Clock Clock; 153 + typedef struct NamedClockList NamedClockList; 154 + 155 + struct NamedClockList { 156 + char *name; 157 + Clock *clock; 158 + bool output; 159 + bool alias; 160 + QLIST_ENTRY(NamedClockList) node; 161 + }; 162 + 152 163 /** 153 164 * DeviceState: 154 165 * @realized: Indicates whether the device has been fully constructed. ··· 171 182 bool allow_unplug_during_migration; 172 183 BusState *parent_bus; 173 184 QLIST_HEAD(, NamedGPIOList) gpios; 185 + QLIST_HEAD(, NamedClockList) clocks; 174 186 QLIST_HEAD(, BusState) child_bus; 175 187 int num_child_bus; 176 188 int instance_id_alias;
+4 -1
include/sysemu/device_tree.h
··· 39 39 * NULL. If there is no error but no matching node was found, the 40 40 * returned array contains a single element equal to NULL. If an error 41 41 * was encountered when parsing the blob, the function returns NULL 42 + * 43 + * @name may be NULL to wildcard names and only match compatibility 44 + * strings. 42 45 */ 43 - char **qemu_fdt_node_path(void *fdt, const char *name, char *compat, 46 + char **qemu_fdt_node_path(void *fdt, const char *name, const char *compat, 44 47 Error **errp); 45 48 46 49 /**
+9
qdev-monitor.c
··· 38 38 #include "migration/misc.h" 39 39 #include "migration/migration.h" 40 40 #include "qemu/cutils.h" 41 + #include "hw/clock.h" 41 42 42 43 /* 43 44 * Aliases were a bad idea from the start. Let's keep them ··· 737 738 ObjectClass *class; 738 739 BusState *child; 739 740 NamedGPIOList *ngl; 741 + NamedClockList *ncl; 740 742 741 743 qdev_printf("dev: %s, id \"%s\"\n", object_get_typename(OBJECT(dev)), 742 744 dev->id ? dev->id : ""); ··· 750 752 qdev_printf("gpio-out \"%s\" %d\n", ngl->name ? ngl->name : "", 751 753 ngl->num_out); 752 754 } 755 + } 756 + QLIST_FOREACH(ncl, &dev->clocks, node) { 757 + qdev_printf("clock-%s%s \"%s\" freq_hz=%e\n", 758 + ncl->output ? "out" : "in", 759 + ncl->alias ? " (alias)" : "", 760 + ncl->name, 761 + CLOCK_PERIOD_TO_HZ(1.0 * clock_get(ncl->clock))); 753 762 } 754 763 class = object_get_class(OBJECT(dev)); 755 764 do {
+8 -1
target/arm/cpu-qom.h
··· 35 35 36 36 #define TYPE_ARM_MAX_CPU "max-" TYPE_ARM_CPU 37 37 38 - typedef struct ARMCPUInfo ARMCPUInfo; 38 + typedef struct ARMCPUInfo { 39 + const char *name; 40 + void (*initfn)(Object *obj); 41 + void (*class_init)(ObjectClass *oc, void *data); 42 + } ARMCPUInfo; 43 + 44 + void arm_cpu_register(const ARMCPUInfo *info); 45 + void aarch64_cpu_register(const ARMCPUInfo *info); 39 46 40 47 /** 41 48 * ARMCPUClass:
+8 -11
target/arm/cpu.c
··· 582 582 CPUARMState *env = &cpu->env; 583 583 bool ret = false; 584 584 585 - /* ARMv7-M interrupt masking works differently than -A or -R. 585 + /* 586 + * ARMv7-M interrupt masking works differently than -A or -R. 586 587 * There is no FIQ/IRQ distinction. Instead of I and F bits 587 588 * masking FIQ and IRQ interrupts, an exception is taken only 588 589 * if it is higher priority than the current execution priority ··· 1912 1913 static void arm1136_r2_initfn(Object *obj) 1913 1914 { 1914 1915 ARMCPU *cpu = ARM_CPU(obj); 1915 - /* What qemu calls "arm1136_r2" is actually the 1136 r0p2, ie an 1916 + /* 1917 + * What qemu calls "arm1136_r2" is actually the 1136 r0p2, ie an 1916 1918 * older core than plain "arm1136". In particular this does not 1917 1919 * have the v6K features. 1918 1920 * These ID register values are correct for 1136 but may be wrong ··· 2693 2695 2694 2696 #endif /* !defined(CONFIG_USER_ONLY) || !defined(TARGET_AARCH64) */ 2695 2697 2696 - struct ARMCPUInfo { 2697 - const char *name; 2698 - void (*initfn)(Object *obj); 2699 - void (*class_init)(ObjectClass *oc, void *data); 2700 - }; 2701 - 2702 2698 static const ARMCPUInfo arm_cpus[] = { 2703 2699 #if !defined(CONFIG_USER_ONLY) || !defined(TARGET_AARCH64) 2704 2700 { .name = "arm926", .initfn = arm926_initfn }, 2705 2701 { .name = "arm946", .initfn = arm946_initfn }, 2706 2702 { .name = "arm1026", .initfn = arm1026_initfn }, 2707 - /* What QEMU calls "arm1136-r2" is actually the 1136 r0p2, i.e. an 2703 + /* 2704 + * What QEMU calls "arm1136-r2" is actually the 1136 r0p2, i.e. an 2708 2705 * older core than plain "arm1136". In particular this does not 2709 2706 * have the v6K features. 
2710 2707 */ ··· 2864 2861 acc->info = data; 2865 2862 } 2866 2863 2867 - static void cpu_register(const ARMCPUInfo *info) 2864 + void arm_cpu_register(const ARMCPUInfo *info) 2868 2865 { 2869 2866 TypeInfo type_info = { 2870 2867 .parent = TYPE_ARM_CPU, ··· 2905 2902 type_register_static(&idau_interface_type_info); 2906 2903 2907 2904 while (info->name) { 2908 - cpu_register(info); 2905 + arm_cpu_register(info); 2909 2906 info++; 2910 2907 } 2911 2908
+1 -7
target/arm/cpu64.c
··· 737 737 cpu_max_set_sve_max_vq, NULL, NULL, &error_fatal); 738 738 } 739 739 740 - struct ARMCPUInfo { 741 - const char *name; 742 - void (*initfn)(Object *obj); 743 - void (*class_init)(ObjectClass *oc, void *data); 744 - }; 745 - 746 740 static const ARMCPUInfo aarch64_cpus[] = { 747 741 { .name = "cortex-a57", .initfn = aarch64_a57_initfn }, 748 742 { .name = "cortex-a53", .initfn = aarch64_a53_initfn }, ··· 825 819 acc->info = data; 826 820 } 827 821 828 - static void aarch64_cpu_register(const ARMCPUInfo *info) 822 + void aarch64_cpu_register(const ARMCPUInfo *info) 829 823 { 830 824 TypeInfo type_info = { 831 825 .parent = TYPE_AARCH64_CPU,
+17
target/arm/helper.c
···
     return CP_ACCESS_OK;
 }

+#ifdef CONFIG_TCG
 static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
                              MMUAccessType access_type, ARMMMUIdx mmu_idx)
 {
···
     }
     return par64;
 }
+#endif /* CONFIG_TCG */

 static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
 {
+#ifdef CONFIG_TCG
     MMUAccessType access_type = ri->opc2 & 1 ? MMU_DATA_STORE : MMU_DATA_LOAD;
     uint64_t par64;
     ARMMMUIdx mmu_idx;
···
     par64 = do_ats_write(env, value, access_type, mmu_idx);

     A32_BANKED_CURRENT_REG_SET(env, par, par64);
+#else
+    /* Handled by hardware accelerator. */
+    g_assert_not_reached();
+#endif /* CONFIG_TCG */
 }

 static void ats1h_write(CPUARMState *env, const ARMCPRegInfo *ri,
                         uint64_t value)
 {
+#ifdef CONFIG_TCG
     MMUAccessType access_type = ri->opc2 & 1 ? MMU_DATA_STORE : MMU_DATA_LOAD;
     uint64_t par64;

     par64 = do_ats_write(env, value, access_type, ARMMMUIdx_E2);

     A32_BANKED_CURRENT_REG_SET(env, par, par64);
+#else
+    /* Handled by hardware accelerator. */
+    g_assert_not_reached();
+#endif /* CONFIG_TCG */
 }

 static CPAccessResult at_s1e2_access(CPUARMState *env, const ARMCPRegInfo *ri,
···
 static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
                         uint64_t value)
 {
+#ifdef CONFIG_TCG
     MMUAccessType access_type = ri->opc2 & 1 ? MMU_DATA_STORE : MMU_DATA_LOAD;
     ARMMMUIdx mmu_idx;
     int secure = arm_is_secure_below_el3(env);
···
     }

     env->cp15.par_el[1] = do_ats_write(env, value, access_type, mmu_idx);
+#else
+    /* Handled by hardware accelerator. */
+    g_assert_not_reached();
+#endif /* CONFIG_TCG */
 }
 #endif

+11 -16
target/arm/helper.h
···
 DEF_HELPER_2(neon_hsub_s32, s32, s32, s32)
 DEF_HELPER_2(neon_hsub_u32, i32, i32, i32)

-DEF_HELPER_2(neon_cgt_u8, i32, i32, i32)
-DEF_HELPER_2(neon_cgt_s8, i32, i32, i32)
-DEF_HELPER_2(neon_cgt_u16, i32, i32, i32)
-DEF_HELPER_2(neon_cgt_s16, i32, i32, i32)
-DEF_HELPER_2(neon_cgt_u32, i32, i32, i32)
-DEF_HELPER_2(neon_cgt_s32, i32, i32, i32)
-DEF_HELPER_2(neon_cge_u8, i32, i32, i32)
-DEF_HELPER_2(neon_cge_s8, i32, i32, i32)
-DEF_HELPER_2(neon_cge_u16, i32, i32, i32)
-DEF_HELPER_2(neon_cge_s16, i32, i32, i32)
-DEF_HELPER_2(neon_cge_u32, i32, i32, i32)
-DEF_HELPER_2(neon_cge_s32, i32, i32, i32)
-
 DEF_HELPER_2(neon_pmin_u8, i32, i32, i32)
 DEF_HELPER_2(neon_pmin_s8, i32, i32, i32)
 DEF_HELPER_2(neon_pmin_u16, i32, i32, i32)
···
 DEF_HELPER_2(neon_tst_u8, i32, i32, i32)
 DEF_HELPER_2(neon_tst_u16, i32, i32, i32)
 DEF_HELPER_2(neon_tst_u32, i32, i32, i32)
-DEF_HELPER_2(neon_ceq_u8, i32, i32, i32)
-DEF_HELPER_2(neon_ceq_u16, i32, i32, i32)
-DEF_HELPER_2(neon_ceq_u32, i32, i32, i32)

 DEF_HELPER_1(neon_clz_u8, i32, i32)
 DEF_HELPER_1(neon_clz_u16, i32, i32)
···
 DEF_HELPER_FLAGS_2(frint64_s, TCG_CALL_NO_RWG, f32, f32, ptr)
 DEF_HELPER_FLAGS_2(frint32_d, TCG_CALL_NO_RWG, f64, f64, ptr)
 DEF_HELPER_FLAGS_2(frint64_d, TCG_CALL_NO_RWG, f64, f64, ptr)
+
+DEF_HELPER_FLAGS_3(gvec_ceq0_b, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(gvec_ceq0_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(gvec_clt0_b, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(gvec_clt0_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(gvec_cle0_b, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(gvec_cle0_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(gvec_cgt0_b, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(gvec_cgt0_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(gvec_cge0_b, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(gvec_cge0_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)

 DEF_HELPER_FLAGS_4(gvec_sshl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(gvec_sshl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
-24
target/arm/neon_helper.c
···
     return dest;
 }

-#define NEON_FN(dest, src1, src2) dest = (src1 > src2) ? ~0 : 0
-NEON_VOP(cgt_s8, neon_s8, 4)
-NEON_VOP(cgt_u8, neon_u8, 4)
-NEON_VOP(cgt_s16, neon_s16, 2)
-NEON_VOP(cgt_u16, neon_u16, 2)
-NEON_VOP(cgt_s32, neon_s32, 1)
-NEON_VOP(cgt_u32, neon_u32, 1)
-#undef NEON_FN
-
-#define NEON_FN(dest, src1, src2) dest = (src1 >= src2) ? ~0 : 0
-NEON_VOP(cge_s8, neon_s8, 4)
-NEON_VOP(cge_u8, neon_u8, 4)
-NEON_VOP(cge_s16, neon_s16, 2)
-NEON_VOP(cge_u16, neon_u16, 2)
-NEON_VOP(cge_s32, neon_s32, 1)
-NEON_VOP(cge_u32, neon_u32, 1)
-#undef NEON_FN
-
 #define NEON_FN(dest, src1, src2) dest = (src1 < src2) ? src1 : src2
 NEON_POP(pmin_s8, neon_s8, 4)
 NEON_POP(pmin_u8, neon_u8, 4)
···
 NEON_VOP(tst_u8, neon_u8, 4)
 NEON_VOP(tst_u16, neon_u16, 2)
 NEON_VOP(tst_u32, neon_u32, 1)
-#undef NEON_FN
-
-#define NEON_FN(dest, src1, src2) dest = (src1 == src2) ? -1 : 0
-NEON_VOP(ceq_u8, neon_u8, 4)
-NEON_VOP(ceq_u16, neon_u16, 2)
-NEON_VOP(ceq_u32, neon_u32, 1)
 #undef NEON_FN

 /* Count Leading Sign/Zero Bits. */
+17 -47
target/arm/translate-a64.c
···
                      is_q ? 16 : 8, vec_full_reg_size(s));
 }

+/* Expand a 2-operand AdvSIMD vector operation using an op descriptor. */
+static void gen_gvec_op2(DisasContext *s, bool is_q, int rd,
+                         int rn, const GVecGen2 *gvec_op)
+{
+    tcg_gen_gvec_2(vec_full_reg_offset(s, rd), vec_full_reg_offset(s, rn),
+                   is_q ? 16 : 8, vec_full_reg_size(s), gvec_op);
+}
+
 /* Expand a 2-operand + immediate AdvSIMD vector operation using
  * an op descriptor.
  */
···
             return;
         }
         break;
+    case 0x8: /* CMGT, CMGE */
+        gen_gvec_op2(s, is_q, rd, rn, u ? &cge0_op[size] : &cgt0_op[size]);
+        return;
+    case 0x9: /* CMEQ, CMLE */
+        gen_gvec_op2(s, is_q, rd, rn, u ? &cle0_op[size] : &ceq0_op[size]);
+        return;
+    case 0xa: /* CMLT */
+        gen_gvec_op2(s, is_q, rd, rn, &clt0_op[size]);
+        return;
     case 0xb:
         if (u) { /* ABS, NEG */
             gen_gvec_fn2(s, is_q, rd, rn, tcg_gen_gvec_neg, size);
···
         for (pass = 0; pass < (is_q ? 4 : 2); pass++) {
             TCGv_i32 tcg_op = tcg_temp_new_i32();
             TCGv_i32 tcg_res = tcg_temp_new_i32();
-            TCGCond cond;

             read_vec_element_i32(s, tcg_op, rn, pass, MO_32);

             if (size == 2) {
                 /* Special cases for 32 bit elements */
                 switch (opcode) {
-                case 0xa: /* CMLT */
-                    /* 32 bit integer comparison against zero, result is
-                     * test ? (2^32 - 1) : 0. We implement via setcond(test)
-                     * and inverting.
-                     */
-                    cond = TCG_COND_LT;
-                do_cmop:
-                    tcg_gen_setcondi_i32(cond, tcg_res, tcg_op, 0);
-                    tcg_gen_neg_i32(tcg_res, tcg_res);
-                    break;
-                case 0x8: /* CMGT, CMGE */
-                    cond = u ? TCG_COND_GE : TCG_COND_GT;
-                    goto do_cmop;
-                case 0x9: /* CMEQ, CMLE */
-                    cond = u ? TCG_COND_LE : TCG_COND_EQ;
-                    goto do_cmop;
                 case 0x4: /* CLS */
                     if (u) {
                         tcg_gen_clzi_i32(tcg_res, tcg_op, 32);
···
                     };
                     genfn = fns[size][u];
                     genfn(tcg_res, cpu_env, tcg_op);
-                    break;
-                }
-                case 0x8: /* CMGT, CMGE */
-                case 0x9: /* CMEQ, CMLE */
-                case 0xa: /* CMLT */
-                {
-                    static NeonGenTwoOpFn * const fns[3][2] = {
-                        { gen_helper_neon_cgt_s8, gen_helper_neon_cgt_s16 },
-                        { gen_helper_neon_cge_s8, gen_helper_neon_cge_s16 },
-                        { gen_helper_neon_ceq_u8, gen_helper_neon_ceq_u16 },
-                    };
-                    NeonGenTwoOpFn *genfn;
-                    int comp;
-                    bool reverse;
-                    TCGv_i32 tcg_zero = tcg_const_i32(0);
-
-                    /* comp = index into [CMGT, CMGE, CMEQ, CMLE, CMLT] */
-                    comp = (opcode - 0x8) * 2 + u;
-                    /* ...but LE, LT are implemented as reverse GE, GT */
-                    reverse = (comp > 2);
-                    if (reverse) {
-                        comp = 4 - comp;
-                    }
-                    genfn = fns[comp][size];
-                    if (reverse) {
-                        genfn(tcg_res, tcg_zero, tcg_op);
-                    } else {
-                        genfn(tcg_res, tcg_op, tcg_zero);
-                    }
-                    tcg_temp_free_i32(tcg_zero);
                     break;
                 }
                 case 0x4: /* CLS, CLZ */
+220 -36
target/arm/translate.c
···
     return 1;
 }

+static void gen_ceq0_i32(TCGv_i32 d, TCGv_i32 a)
+{
+    tcg_gen_setcondi_i32(TCG_COND_EQ, d, a, 0);
+    tcg_gen_neg_i32(d, d);
+}
+
+static void gen_ceq0_i64(TCGv_i64 d, TCGv_i64 a)
+{
+    tcg_gen_setcondi_i64(TCG_COND_EQ, d, a, 0);
+    tcg_gen_neg_i64(d, d);
+}
+
+static void gen_ceq0_vec(unsigned vece, TCGv_vec d, TCGv_vec a)
+{
+    TCGv_vec zero = tcg_const_zeros_vec_matching(d);
+    tcg_gen_cmp_vec(TCG_COND_EQ, vece, d, a, zero);
+    tcg_temp_free_vec(zero);
+}
+
+static const TCGOpcode vecop_list_cmp[] = {
+    INDEX_op_cmp_vec, 0
+};
+
+const GVecGen2 ceq0_op[4] = {
+    { .fno = gen_helper_gvec_ceq0_b,
+      .fniv = gen_ceq0_vec,
+      .opt_opc = vecop_list_cmp,
+      .vece = MO_8 },
+    { .fno = gen_helper_gvec_ceq0_h,
+      .fniv = gen_ceq0_vec,
+      .opt_opc = vecop_list_cmp,
+      .vece = MO_16 },
+    { .fni4 = gen_ceq0_i32,
+      .fniv = gen_ceq0_vec,
+      .opt_opc = vecop_list_cmp,
+      .vece = MO_32 },
+    { .fni8 = gen_ceq0_i64,
+      .fniv = gen_ceq0_vec,
+      .opt_opc = vecop_list_cmp,
+      .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+      .vece = MO_64 },
+};
+
+static void gen_cle0_i32(TCGv_i32 d, TCGv_i32 a)
+{
+    tcg_gen_setcondi_i32(TCG_COND_LE, d, a, 0);
+    tcg_gen_neg_i32(d, d);
+}
+
+static void gen_cle0_i64(TCGv_i64 d, TCGv_i64 a)
+{
+    tcg_gen_setcondi_i64(TCG_COND_LE, d, a, 0);
+    tcg_gen_neg_i64(d, d);
+}
+
+static void gen_cle0_vec(unsigned vece, TCGv_vec d, TCGv_vec a)
+{
+    TCGv_vec zero = tcg_const_zeros_vec_matching(d);
+    tcg_gen_cmp_vec(TCG_COND_LE, vece, d, a, zero);
+    tcg_temp_free_vec(zero);
+}
+
+const GVecGen2 cle0_op[4] = {
+    { .fno = gen_helper_gvec_cle0_b,
+      .fniv = gen_cle0_vec,
+      .opt_opc = vecop_list_cmp,
+      .vece = MO_8 },
+    { .fno = gen_helper_gvec_cle0_h,
+      .fniv = gen_cle0_vec,
+      .opt_opc = vecop_list_cmp,
+      .vece = MO_16 },
+    { .fni4 = gen_cle0_i32,
+      .fniv = gen_cle0_vec,
+      .opt_opc = vecop_list_cmp,
+      .vece = MO_32 },
+    { .fni8 = gen_cle0_i64,
+      .fniv = gen_cle0_vec,
+      .opt_opc = vecop_list_cmp,
+      .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+      .vece = MO_64 },
+};
+
+static void gen_cge0_i32(TCGv_i32 d, TCGv_i32 a)
+{
+    tcg_gen_setcondi_i32(TCG_COND_GE, d, a, 0);
+    tcg_gen_neg_i32(d, d);
+}
+
+static void gen_cge0_i64(TCGv_i64 d, TCGv_i64 a)
+{
+    tcg_gen_setcondi_i64(TCG_COND_GE, d, a, 0);
+    tcg_gen_neg_i64(d, d);
+}
+
+static void gen_cge0_vec(unsigned vece, TCGv_vec d, TCGv_vec a)
+{
+    TCGv_vec zero = tcg_const_zeros_vec_matching(d);
+    tcg_gen_cmp_vec(TCG_COND_GE, vece, d, a, zero);
+    tcg_temp_free_vec(zero);
+}
+
+const GVecGen2 cge0_op[4] = {
+    { .fno = gen_helper_gvec_cge0_b,
+      .fniv = gen_cge0_vec,
+      .opt_opc = vecop_list_cmp,
+      .vece = MO_8 },
+    { .fno = gen_helper_gvec_cge0_h,
+      .fniv = gen_cge0_vec,
+      .opt_opc = vecop_list_cmp,
+      .vece = MO_16 },
+    { .fni4 = gen_cge0_i32,
+      .fniv = gen_cge0_vec,
+      .opt_opc = vecop_list_cmp,
+      .vece = MO_32 },
+    { .fni8 = gen_cge0_i64,
+      .fniv = gen_cge0_vec,
+      .opt_opc = vecop_list_cmp,
+      .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+      .vece = MO_64 },
+};
+
+static void gen_clt0_i32(TCGv_i32 d, TCGv_i32 a)
+{
+    tcg_gen_setcondi_i32(TCG_COND_LT, d, a, 0);
+    tcg_gen_neg_i32(d, d);
+}
+
+static void gen_clt0_i64(TCGv_i64 d, TCGv_i64 a)
+{
+    tcg_gen_setcondi_i64(TCG_COND_LT, d, a, 0);
+    tcg_gen_neg_i64(d, d);
+}
+
+static void gen_clt0_vec(unsigned vece, TCGv_vec d, TCGv_vec a)
+{
+    TCGv_vec zero = tcg_const_zeros_vec_matching(d);
+    tcg_gen_cmp_vec(TCG_COND_LT, vece, d, a, zero);
+    tcg_temp_free_vec(zero);
+}
+
+const GVecGen2 clt0_op[4] = {
+    { .fno = gen_helper_gvec_clt0_b,
+      .fniv = gen_clt0_vec,
+      .opt_opc = vecop_list_cmp,
+      .vece = MO_8 },
+    { .fno = gen_helper_gvec_clt0_h,
+      .fniv = gen_clt0_vec,
+      .opt_opc = vecop_list_cmp,
+      .vece = MO_16 },
+    { .fni4 = gen_clt0_i32,
+      .fniv = gen_clt0_vec,
+      .opt_opc = vecop_list_cmp,
+      .vece = MO_32 },
+    { .fni8 = gen_clt0_i64,
+      .fniv = gen_clt0_vec,
+      .opt_opc = vecop_list_cmp,
+      .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+      .vece = MO_64 },
+};
+
+static void gen_cgt0_i32(TCGv_i32 d, TCGv_i32 a)
+{
+    tcg_gen_setcondi_i32(TCG_COND_GT, d, a, 0);
+    tcg_gen_neg_i32(d, d);
+}
+
+static void gen_cgt0_i64(TCGv_i64 d, TCGv_i64 a)
+{
+    tcg_gen_setcondi_i64(TCG_COND_GT, d, a, 0);
+    tcg_gen_neg_i64(d, d);
+}
+
+static void gen_cgt0_vec(unsigned vece, TCGv_vec d, TCGv_vec a)
+{
+    TCGv_vec zero = tcg_const_zeros_vec_matching(d);
+    tcg_gen_cmp_vec(TCG_COND_GT, vece, d, a, zero);
+    tcg_temp_free_vec(zero);
+}
+
+const GVecGen2 cgt0_op[4] = {
+    { .fno = gen_helper_gvec_cgt0_b,
+      .fniv = gen_cgt0_vec,
+      .opt_opc = vecop_list_cmp,
+      .vece = MO_8 },
+    { .fno = gen_helper_gvec_cgt0_h,
+      .fniv = gen_cgt0_vec,
+      .opt_opc = vecop_list_cmp,
+      .vece = MO_16 },
+    { .fni4 = gen_cgt0_i32,
+      .fniv = gen_cgt0_vec,
+      .opt_opc = vecop_list_cmp,
+      .vece = MO_32 },
+    { .fni8 = gen_cgt0_i64,
+      .fniv = gen_cgt0_vec,
+      .opt_opc = vecop_list_cmp,
+      .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+      .vece = MO_64 },
+};
+
 static void gen_ssra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
 {
     tcg_gen_vec_sar8i_i64(a, a, shift);
···
             tcg_gen_gvec_abs(size, rd_ofs, rm_ofs, vec_size, vec_size);
             break;

+        case NEON_2RM_VCEQ0:
+            tcg_gen_gvec_2(rd_ofs, rm_ofs, vec_size,
+                           vec_size, &ceq0_op[size]);
+            break;
+        case NEON_2RM_VCGT0:
+            tcg_gen_gvec_2(rd_ofs, rm_ofs, vec_size,
+                           vec_size, &cgt0_op[size]);
+            break;
+        case NEON_2RM_VCLE0:
+            tcg_gen_gvec_2(rd_ofs, rm_ofs, vec_size,
+                           vec_size, &cle0_op[size]);
+            break;
+        case NEON_2RM_VCGE0:
+            tcg_gen_gvec_2(rd_ofs, rm_ofs, vec_size,
+                           vec_size, &cge0_op[size]);
+            break;
+        case NEON_2RM_VCLT0:
+            tcg_gen_gvec_2(rd_ofs, rm_ofs, vec_size,
+                           vec_size, &clt0_op[size]);
+            break;
+
         default:
         elementwise:
             for (pass = 0; pass < (q ? 4 : 2); pass++) {
···
                     break;
                 default: abort();
                 }
-                break;
-            case NEON_2RM_VCGT0: case NEON_2RM_VCLE0:
-                tmp2 = tcg_const_i32(0);
-                switch(size) {
-                case 0: gen_helper_neon_cgt_s8(tmp, tmp, tmp2); break;
-                case 1: gen_helper_neon_cgt_s16(tmp, tmp, tmp2); break;
-                case 2: gen_helper_neon_cgt_s32(tmp, tmp, tmp2); break;
-                default: abort();
-                }
-                tcg_temp_free_i32(tmp2);
-                if (op == NEON_2RM_VCLE0) {
-                    tcg_gen_not_i32(tmp, tmp);
-                }
-                break;
-            case NEON_2RM_VCGE0: case NEON_2RM_VCLT0:
-                tmp2 = tcg_const_i32(0);
-                switch(size) {
-                case 0: gen_helper_neon_cge_s8(tmp, tmp, tmp2); break;
-                case 1: gen_helper_neon_cge_s16(tmp, tmp, tmp2); break;
-                case 2: gen_helper_neon_cge_s32(tmp, tmp, tmp2); break;
-                default: abort();
-                }
-                tcg_temp_free_i32(tmp2);
-                if (op == NEON_2RM_VCLT0) {
-                    tcg_gen_not_i32(tmp, tmp);
-                }
-                break;
-            case NEON_2RM_VCEQ0:
-                tmp2 = tcg_const_i32(0);
-                switch(size) {
-                case 0: gen_helper_neon_ceq_u8(tmp, tmp, tmp2); break;
-                case 1: gen_helper_neon_ceq_u16(tmp, tmp, tmp2); break;
-                case 2: gen_helper_neon_ceq_u32(tmp, tmp, tmp2); break;
-                default: abort();
-                }
-                tcg_temp_free_i32(tmp2);
                 break;
             case NEON_2RM_VCGT0_F:
             {
+5
target/arm/translate.h
···
 uint64_t vfp_expand_imm(int size, uint8_t imm8);

 /* Vector operations shared between ARM and AArch64.  */
+extern const GVecGen2 ceq0_op[4];
+extern const GVecGen2 clt0_op[4];
+extern const GVecGen2 cgt0_op[4];
+extern const GVecGen2 cle0_op[4];
+extern const GVecGen2 cge0_op[4];
 extern const GVecGen3 mla_op[4];
 extern const GVecGen3 mls_op[4];
 extern const GVecGen3 cmtst_op[4];
+25
target/arm/vec_helper.c
···
     }
 }
 #endif
+
+#define DO_CMP0(NAME, TYPE, OP)                              \
+void HELPER(NAME)(void *vd, void *vn, uint32_t desc)         \
+{                                                            \
+    intptr_t i, opr_sz = simd_oprsz(desc);                   \
+    for (i = 0; i < opr_sz; i += sizeof(TYPE)) {             \
+        TYPE nn = *(TYPE *)(vn + i);                         \
+        *(TYPE *)(vd + i) = -(nn OP 0);                      \
+    }                                                        \
+    clear_tail(vd, opr_sz, simd_maxsz(desc));                \
+}
+
+DO_CMP0(gvec_ceq0_b, int8_t, ==)
+DO_CMP0(gvec_clt0_b, int8_t, <)
+DO_CMP0(gvec_cle0_b, int8_t, <=)
+DO_CMP0(gvec_cgt0_b, int8_t, >)
+DO_CMP0(gvec_cge0_b, int8_t, >=)
+
+DO_CMP0(gvec_ceq0_h, int16_t, ==)
+DO_CMP0(gvec_clt0_h, int16_t, <)
+DO_CMP0(gvec_cle0_h, int16_t, <=)
+DO_CMP0(gvec_cgt0_h, int16_t, >)
+DO_CMP0(gvec_cge0_h, int16_t, >=)
+
+#undef DO_CMP0
+1
tests/Makefile.include
···
 	hw/core/fw-path-provider.o \
 	hw/core/reset.o \
 	hw/core/vmstate-if.o \
+	hw/core/clock.o hw/core/qdev-clock.o \
 	$(test-qapi-obj-y)
 tests/test-vmstate$(EXESUF): tests/test-vmstate.o \
 	migration/vmstate.o migration/vmstate-types.o migration/qemu-file.o \
+10 -5
tests/acceptance/boot_linux_console.py
···
         """
         uboot_url = ('https://raw.githubusercontent.com/'
                      'Subbaraya-Sundeep/qemu-test-binaries/'
-                     'fa030bd77a014a0b8e360d3b7011df89283a2f0b/u-boot')
-        uboot_hash = 'abba5d9c24cdd2d49cdc2a8aa92976cf20737eff'
+                     'fe371d32e50ca682391e1e70ab98c2942aeffb01/u-boot')
+        uboot_hash = 'cbb8cbab970f594bf6523b9855be209c08374ae2'
         uboot_path = self.fetch_asset(uboot_url, asset_hash=uboot_hash)
         spi_url = ('https://raw.githubusercontent.com/'
                    'Subbaraya-Sundeep/qemu-test-binaries/'
-                   'fa030bd77a014a0b8e360d3b7011df89283a2f0b/spi.bin')
-        spi_hash = '85f698329d38de63aea6e884a86fbde70890a78a'
+                   'fe371d32e50ca682391e1e70ab98c2942aeffb01/spi.bin')
+        spi_hash = '65523a1835949b6f4553be96dec1b6a38fb05501'
         spi_path = self.fetch_asset(spi_url, asset_hash=spi_hash)

         self.vm.set_console()
···
                          '-drive', 'file=' + spi_path + ',if=mtd,format=raw',
                          '-no-reboot')
         self.vm.launch()
-        self.wait_for_console_pattern('init started: BusyBox')
+        self.wait_for_console_pattern('Enter \'help\' for a list')
+
+        exec_command_and_wait_for_pattern(self, 'ifconfig eth0 10.0.2.15',
+                                          'eth0: link becomes ready')
+        exec_command_and_wait_for_pattern(self, 'ping -c 3 10.0.2.2',
+                '3 packets transmitted, 3 packets received, 0% packet loss')