
oslib-posix: add helpers for stack alloc and free

The allocated stack will be adjusted to the minimum stack size
supported by the OS and rounded up to a multiple of the system page
size. Additionally, an architecture-dependent guard page is added to
the stack to catch stack overflows.

Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

Authored by Peter Lieven, committed by Kevin Wolf
8737d9e0 74e1ae7c

+69 additions across two files

include/sysemu/os-posix.h (+27)
 bool is_daemonized(void);
 
+/**
+ * qemu_alloc_stack:
+ * @sz: pointer to a size_t holding the requested usable stack size
+ *
+ * Allocate memory that can be used as a stack, for instance for
+ * coroutines. If the memory cannot be allocated, this function
+ * will abort (like g_malloc()). This function also inserts an
+ * additional guard page to catch a potential stack overflow.
+ * Note that the memory required for the guard page and alignment
+ * and minimal stack size restrictions will increase the value of sz.
+ *
+ * The allocated stack must be freed with qemu_free_stack().
+ *
+ * Returns: pointer to (the lowest address of) the stack memory.
+ */
+void *qemu_alloc_stack(size_t *sz);
+
+/**
+ * qemu_free_stack:
+ * @stack: stack to free
+ * @sz: size of stack in bytes
+ *
+ * Free a stack allocated via qemu_alloc_stack(). Note that sz must
+ * be exactly the adjusted stack size returned by qemu_alloc_stack.
+ */
+void qemu_free_stack(void *stack, size_t sz);
+
 #endif
util/oslib-posix.c (+42)
     }
     return pid;
 }
+
+void *qemu_alloc_stack(size_t *sz)
+{
+    void *ptr, *guardpage;
+    size_t pagesz = getpagesize();
+#ifdef _SC_THREAD_STACK_MIN
+    /* avoid stacks smaller than _SC_THREAD_STACK_MIN */
+    long min_stack_sz = sysconf(_SC_THREAD_STACK_MIN);
+    *sz = MAX(MAX(min_stack_sz, 0), *sz);
+#endif
+    /* adjust stack size to a multiple of the page size */
+    *sz = ROUND_UP(*sz, pagesz);
+    /* allocate one extra page for the guard page */
+    *sz += pagesz;
+
+    ptr = mmap(NULL, *sz, PROT_READ | PROT_WRITE,
+               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+    if (ptr == MAP_FAILED) {
+        abort();
+    }
+
+#if defined(HOST_IA64)
+    /* separate register stack */
+    guardpage = ptr + (((*sz - pagesz) / 2) & ~pagesz);
+#elif defined(HOST_HPPA)
+    /* stack grows up */
+    guardpage = ptr + *sz - pagesz;
+#else
+    /* stack grows down */
+    guardpage = ptr;
+#endif
+    if (mprotect(guardpage, pagesz, PROT_NONE) != 0) {
+        abort();
+    }
+
+    return ptr;
+}
+
+void qemu_free_stack(void *stack, size_t sz)
+{
+    munmap(stack, sz);
+}