path: root/include/exec/ram_addr.h
2015-01-08  exec: qemu_ram_alloc_resizeable, qemu_ram_resize  (Michael S. Tsirkin)
Add an API to allocate "resizeable" RAM. Such a block looks just like regular RAM, but has the special property that only a portion of it (used_length) is actually used and migrated. This used_length can change across reboots.

Follow-up patches will change used_length for such blocks at migration time, making it easier to extend devices using such RAM (notably ACPI, but conceivably other ROMs in the future) without breaking migration compatibility or wasting ROM (guest) memory. The device is notified on resize, so it can adjust if necessary.

qemu_ram_alloc_resizeable allocates this memory; qemu_ram_resize resizes it.

Note: nothing prevents making all RAM resizeable in this way. However, reviewers felt that enabling this only selectively will make some classes of error easier to detect.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
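To make the used_length/max_length split concrete, here is a minimal conceptual sketch in plain C of what a resizeable block amounts to; the ResizeableBlock type and resizeable_block_resize() are illustrative stand-ins, not QEMU's RAMBlock or its API:

    /* Conceptual sketch only: a hypothetical ResizeableBlock mirroring the
     * used_length/max_length split described above; not QEMU's RAMBlock. */
    #include <stdint.h>

    typedef struct ResizeableBlock {
        void     *host;         /* max_length bytes are allocated up front     */
        uint64_t  max_length;   /* fixed upper bound, never changes            */
        uint64_t  used_length;  /* only this much is in use and gets migrated  */
        void    (*resized)(uint64_t new_used_length);  /* device notification  */
    } ResizeableBlock;

    /* Grow or shrink the used portion; the allocation itself never moves. */
    static int resizeable_block_resize(ResizeableBlock *b, uint64_t newsize)
    {
        if (newsize > b->max_length) {
            return -1;                  /* cannot exceed what was allocated */
        }
        if (newsize != b->used_length) {
            b->used_length = newsize;
            if (b->resized) {
                b->resized(newsize);    /* e.g. an ACPI device adjusts its view */
            }
        }
        return 0;
    }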
2015-01-08  exec: cpu_physical_memory_set/clear_dirty_range  (Michael S. Tsirkin)
Make cpu_physical_memory_set/clear_dirty_range behave symmetrically. To clear a range for a given client type only, add cpu_physical_memory_clear_dirty_range_type.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
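A toy sketch of the symmetry being described, assuming a simple per-client array of dirty bitmaps; the CLIENT_* names and helpers below are made up for illustration and do not mirror the real header:

    /* Toy model: one dirty bitmap per client, a symmetric set/clear pair that
     * touches all clients, plus a _type variant that clears just one client. */
    #include <limits.h>
    #include <stddef.h>

    enum { CLIENT_VGA, CLIENT_CODE, CLIENT_MIGRATION, CLIENT_COUNT };

    #define TRACKED_PAGES 1024
    #define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

    static unsigned long dirty[CLIENT_COUNT][TRACKED_PAGES / BITS_PER_LONG];

    static void modify_range(unsigned long *map, size_t start, size_t count, int set)
    {
        for (size_t i = start; i < start + count; i++) {
            if (set) {
                map[i / BITS_PER_LONG] |= 1UL << (i % BITS_PER_LONG);
            } else {
                map[i / BITS_PER_LONG] &= ~(1UL << (i % BITS_PER_LONG));
            }
        }
    }

    /* Symmetric pair: both walk every client's bitmap over the same range. */
    static void dirty_range_set(size_t start, size_t count)
    {
        for (int c = 0; c < CLIENT_COUNT; c++) {
            modify_range(dirty[c], start, count, 1);
        }
    }

    static void dirty_range_clear(size_t start, size_t count)
    {
        for (int c = 0; c < CLIENT_COUNT; c++) {
            modify_range(dirty[c], start, count, 0);
        }
    }

    /* ..._type: clear the range for one client only, leaving the others set. */
    static void dirty_range_clear_type(size_t start, size_t count, int client)
    {
        modify_range(dirty[client], start, count, 0);
    }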
2014-11-18  exec: Handle multipage ranges in invalidate_and_set_dirty()  (Peter Maydell)
The code in invalidate_and_set_dirty() needs to handle addr/length combinations that cross guest physical page boundaries. This can happen, for example, when disk I/O reads large blocks into guest RAM that previously held code for which we have cached translations.

Unfortunately we were only checking the clean/dirty status of the first page in the range, and then calling a tb_invalidate function which only handles ranges that don't cross page boundaries. Fix the function to deal with multipage ranges.

The symptoms of this bug were that guest code would misbehave (e.g. segfault), in particular after a guest reboot, but potentially any time the guest reused a page of its physical RAM for new code.

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1416167061-13203-1-git-send-email-peter.maydell@linaro.org
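The fix boils down to walking every page the range touches rather than looking only at the first one. A minimal sketch of that page-walking pattern, with TARGET_PAGE_SIZE and mark_page_dirty() as assumed placeholders rather than the real QEMU code:

    /* Page-walking sketch: cover every guest page the range touches. */
    #include <stdint.h>

    #define TARGET_PAGE_SIZE 4096ULL
    #define TARGET_PAGE_MASK (~(TARGET_PAGE_SIZE - 1))

    void mark_page_dirty(uint64_t page_addr);   /* hypothetical per-page helper */

    static void invalidate_and_set_dirty_sketch(uint64_t addr, uint64_t length)
    {
        uint64_t end = addr + length;

        /* Round the start down to a page boundary and step one page at a time,
         * so a range crossing page boundaries is not truncated to its first page. */
        for (uint64_t page = addr & TARGET_PAGE_MASK; page < end; page += TARGET_PAGE_SIZE) {
            mark_page_dirty(page);
        }
    }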
2014-09-09  exec: add parameter errp to qemu_ram_alloc and qemu_ram_alloc_from_ptr  (Hu Tao)
Add a parameter errp to qemu_ram_alloc and qemu_ram_alloc_from_ptr so that callers can handle allocation errors.

Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
[Assert ptr != NULL in memory_region_init_ram_ptr. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
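The errp out-parameter convention lets the caller decide how to react to an allocation failure instead of the allocator aborting. A self-contained toy version of the pattern; the Error type and err_set()/toy_ram_alloc() here are stand-ins, not QEMU's qapi/error.h API:

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct Error { const char *msg; } Error;

    /* Record an error through errp if the caller asked for one. */
    static void err_set(Error **errp, const char *msg)
    {
        if (errp && !*errp) {
            Error *e = malloc(sizeof(*e));
            if (e) {
                e->msg = msg;
                *errp = e;
            }
        }
    }

    /* The allocator reports failure through errp instead of exiting itself. */
    static void *toy_ram_alloc(size_t size, Error **errp)
    {
        void *p = malloc(size);
        if (!p) {
            err_set(errp, "cannot allocate guest RAM");
        }
        return p;
    }

    int main(void)
    {
        Error *err = NULL;
        void *ram = toy_ram_alloc(64 * 1024, &err);

        if (err) {
            fprintf(stderr, "allocation failed: %s\n", err->msg);
            free(err);
            return 1;
        }
        free(ram);
        return 0;
    }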
2014-07-22  exec: fix migration with devices that use address_space_rw  (Paolo Bonzini)
Devices that use address_space_rw to write large areas to memory (as opposed to address_space_map/unmap) were broken with respect to migration since fe680d0 (exec: Limit translation limiting in address_space_translate to xen, 2014-05-07). Such devices include IDE CD-ROMs.

The reason is that invalidate_and_set_dirty (called by address_space_rw but not by address_space_map/unmap) was only setting the dirty bit for the first page in the translation.

To fix this, introduce cpu_physical_memory_set_dirty_range_nocode, which is the same as cpu_physical_memory_set_dirty_range except that it does not touch the DIRTY_MEMORY_CODE bitmap. This function can be used if the caller invalidates translations with tb_invalidate_phys_page_range.

There is another difference between cpu_physical_memory_set_dirty_range and cpu_physical_memory_set_dirty_flag: the former includes a call to xen_modified_memory. This is handled separately in invalidate_and_set_dirty, and is not needed by other callers of cpu_physical_memory_set_dirty_range_nocode, so leave it alone.

Just one nit: now that invalidate_and_set_dirty takes care of handling multiple pages, there is no need for address_space_unmap to wrap it in a loop. In fact that loop would now be O(n^2).

Reported-by: Dave Gilbert <dgilbert@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
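In miniature, the split looks roughly like this; the client names and the dirty_bitmap_set() helper are hypothetical, and the only point is that the _nocode variant skips the code-tracking bitmap because its callers have already invalidated the affected translations:

    /* Toy split: both variants dirty migration and VGA tracking; only the full
     * variant also marks code-tracking state. */
    enum { CLIENT_VGA, CLIENT_CODE, CLIENT_MIGRATION };

    void dirty_bitmap_set(int client, unsigned long start_page, unsigned long npages); /* hypothetical */

    static void set_dirty_range_nocode(unsigned long start_page, unsigned long npages)
    {
        dirty_bitmap_set(CLIENT_MIGRATION, start_page, npages);
        dirty_bitmap_set(CLIENT_VGA, start_page, npages);
        /* no CLIENT_CODE update here: the caller invalidated the TBs itself */
    }

    static void set_dirty_range(unsigned long start_page, unsigned long npages)
    {
        set_dirty_range_nocode(start_page, npages);
        dirty_bitmap_set(CLIENT_CODE, start_page, npages); /* self-modifying code */
    }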
2014-06-29  vhost-user: fix regions provided with VHOST_USER_SET_MEM_TABLE message  (Damjan Marion)
The old code was affected by memory gaps, which resulted in buffer pointers pointing to addresses outside of the mapped regions. Here we introduce the following changes:
- a new function qemu_get_ram_block_host_ptr() returns the host pointer to the RAM block; it is needed to calculate the offset of a specific region in host memory
- a new field, mmap_offset, is added to VhostUserMemoryRegion. It contains the offset at which the specific region starts in the mapped memory. As there is still no wide adoption of vhost-user, agreement was reached not to bump the version number for this change
- the other fields in the VhostUserMemoryRegion struct are not changed, as they are all needed for the usermode application implementation
- region data is no longer taken from ram_list.blocks; instead we use the region data that is already calculated for use in vhost-net
- multiple regions can now share the same fd, and the user application can call mmap() multiple times with the same fd but different offsets (the user needs to take care of offset page alignment)

Signed-off-by: Damjan Marion <damarion@cisco.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
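A sketch of how a backend might use mmap_offset to locate a region inside a shared mapping; the struct below follows the fields named in the description above, but it is illustrative and not the canonical vhost-user definition:

    /* Illustrative region descriptor and host-pointer math. */
    #include <stdint.h>

    typedef struct {
        uint64_t guest_phys_addr;   /* where the region starts in guest space  */
        uint64_t memory_size;       /* length of the region                    */
        uint64_t userspace_addr;    /* QEMU's virtual address for the region   */
        uint64_t mmap_offset;       /* where the region starts inside the
                                       mmap()ed fd backing the RAM block       */
    } VhostUserMemoryRegionSketch;

    /* The backend maps the whole fd once; with mmap_offset several regions can
     * share the same fd without relying on gap-free layout assumptions. */
    static void *region_host_ptr(uint8_t *mmap_base,
                                 const VhostUserMemoryRegionSketch *reg,
                                 uint64_t guest_addr)
    {
        return mmap_base + reg->mmap_offset + (guest_addr - reg->guest_phys_addr);
    }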
2014-06-19  hostmem: add property to map memory with MAP_SHARED  (Paolo Bonzini)
A new "share" property can be used with the "memory-file" backend to map memory with MAP_SHARED instead of MAP_PRIVATE. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Hu Tao <hutao@cn.fujitsu.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2014-06-19  hostmem: allow preallocation of any memory region  (Paolo Bonzini)
And allow preallocation of file-based memory even without -mem-prealloc. Some care is necessary because -mem-prealloc does not allow disabling preallocation for hostmem-file.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
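Preallocation essentially means touching every page of the mapping before the guest starts, so faults are taken up front instead of at runtime. A rough, hedged sketch of that idea; QEMU's real preallocation helper differs in detail:

    #include <stddef.h>
    #include <stdint.h>
    #include <unistd.h>

    /* Fault in every page up front by writing one byte per page. */
    static void prealloc_touch_pages(void *area, size_t size)
    {
        size_t pagesize = (size_t)sysconf(_SC_PAGESIZE);
        volatile uint8_t *p = area;

        for (size_t off = 0; off < size; off += pagesize) {
            p[off] = p[off];   /* read-modify-write forces the page to be backed */
        }
    }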
2014-06-19  memory: add error propagation to file-based RAM allocation  (Paolo Bonzini)
Right now, -mem-path will fall back to RAM-based allocation in some cases. This should never happen with "-object memory-file"; prepare the code by adding correct error propagation.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
MST: drop \n at end of error messages
2014-06-19  memory: reorganize file-based allocation  (Paolo Bonzini)
Split the internal interface in exec.c into a separate function, and push the check on mem_path up to memory_region_init_ram.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2014-06-11  exec: replace ffsl with ctzl  (Natanael Copa)
See commit fbeadf50 (bitops: unify bitops_ffsl with the one in host-utils.h, call it bitops_ctzl) for why ctzl should be used instead of ffsl. This is also needed for musl libc, which does not implement ffsl.

Signed-off-by: Natanael Copa <ncopa@alpinelinux.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
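For reference, the two functions differ only by an off-by-one convention: ffsl is 1-based while ctzl counts trailing zeros, so ffsl(x) == ctzl(x) + 1 for non-zero x. A small demonstration using the GCC/Clang builtin, avoiding libc's ffsl entirely:

    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned long x = 0x50UL;          /* 0b0101'0000: lowest set bit is bit 4 */
        int ctz = __builtin_ctzl(x);       /* 4 */
        int ffs_equiv = ctz + 1;           /* 5, what ffsl(x) would return */

        assert(ctz == 4 && ffs_equiv == 5);
        printf("ctzl(0x%lx) = %d, ffsl equivalent = %d\n", x, ctz, ffs_equiv);
        return 0;
    }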
2014-02-04  exec: fix ram_list dirty map optimization  (Alexey Kardashevskiy)
The ae2810c4bb3b383176e8e1b33931b16c01483aab patch introduced an optimization for ram_list.dirty_memory updates. However, it can only work correctly if hpratio is 1, as the @bitmap parameter stores 1 bit per system page (which may vary, e.g. 4K or 64K on PPC64) while ram_list.dirty_memory stores 1 bit per TARGET_PAGE_SIZE (which is hardcoded to 4K).

This fixes the hpratio != 1 case to fall back to the slow path.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Juan Quintela <quintela@redhat.com>
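The ratio in question is simply host page size divided by target page size, and the word-copy optimization is only valid when it is 1. A tiny illustration with assumed page sizes (64K host pages as on PPC64, 4K target pages):

    #include <stdio.h>

    #define HOST_PAGE_SIZE   (64 * 1024)
    #define TARGET_PAGE_SIZE (4 * 1024)

    int main(void)
    {
        int hpratio = HOST_PAGE_SIZE / TARGET_PAGE_SIZE;   /* 16 in this example */

        /* One dirty host-page bit stands for hpratio consecutive target pages,
         * so the whole-word copy is only correct when hpratio == 1. */
        printf("hpratio = %d -> %s path\n", hpratio,
               hpratio == 1 ? "fast bitmap-copy" : "slow bit-by-bit");
        return 0;
    }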
2014-01-15  exec: Exclude non-portable function for MinGW  (Stefan Weil)
cpu_physical_memory_set_dirty_lebitmap calls getpageaddr and ffsl which are unavailable for MinGW. As the function is unused for MinGW, it can simply be excluded from compilation.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
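The shape of the fix, in miniature: guard the POSIX-only helper with the preprocessor so MinGW builds never compile it. This is illustrative only, not the actual header contents:

    #ifndef _WIN32
    /* Only built on non-Windows hosts; the real body uses interfaces
     * that MinGW does not provide. */
    static inline void posix_only_dirty_helper(void)
    {
    }
    #endif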
2014-01-13  memory: synchronize KVM bitmap using bitmap operations  (Juan Quintela)
If the bitmaps are aligned properly, use bitmap operations. If they are not, just use the old bit-at-a-time code.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
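A toy rendition of the two paths: when the range is word-aligned, the source bitmap can be OR-ed into the destination a whole word at a time, otherwise fall back to bit-by-bit. The function below is an illustration under those assumptions, not the QEMU implementation:

    #include <limits.h>
    #include <stddef.h>

    #define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

    static void sync_bitmap(unsigned long *dst, const unsigned long *src,
                            size_t start_bit, size_t nbits)
    {
        if (start_bit % BITS_PER_LONG == 0 && nbits % BITS_PER_LONG == 0) {
            /* aligned fast path: whole-word bitmap OR */
            size_t words = nbits / BITS_PER_LONG;
            unsigned long *d = dst + start_bit / BITS_PER_LONG;

            for (size_t i = 0; i < words; i++) {
                d[i] |= src[i];
            }
        } else {
            /* unaligned: the old bit-at-a-time code */
            for (size_t i = 0; i < nbits; i++) {
                if (src[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG))) {
                    size_t b = start_bit + i;
                    dst[b / BITS_PER_LONG] |= 1UL << (b % BITS_PER_LONG);
                }
            }
        }
    }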
2014-01-13  memory: move bitmap synchronization to its own function  (Juan Quintela)
We want to keep all the functions that directly handle the dirty bitmap close together. We will change them later.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
2014-01-13  memory: split cpu_physical_memory_* functions into their own include  (Juan Quintela)
All the functions that use ram_addr_t should be here.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>