|
The map cache is a Xen thing, so its API should make this clear.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
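As a rough illustration of the kind of renaming this describes (the
exact new names are not given in this log; the xen_-prefixed
declarations below are an assumption):

    /* Before: generic-sounding names for a cache that only exists
     * under Xen. */
    uint8_t *qemu_map_cache(target_phys_addr_t phys_addr,
                            target_phys_addr_t size, uint8_t lock);
    void qemu_invalidate_entry(uint8_t *buffer);

    /* After (assumed): a xen_ prefix makes the Xen dependency
     * explicit. */
    uint8_t *xen_map_cache(target_phys_addr_t phys_addr,
                           target_phys_addr_t size, uint8_t lock);
    void xen_invalidate_map_cache_entry(uint8_t *buffer);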
|
Replace xen_map_block with qemu_map_cache, passing the appropriate
locking and size parameters.
Replace xen_unmap_block with qemu_invalidate_entry.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
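A hedged sketch of the substitution this describes; the surrounding
variable names are illustrative only:

    /* Before: */
    void *vaddr = xen_map_block(phys_addr, size);
    /* ... use vaddr ... */
    xen_unmap_block(vaddr, size);

    /* After: map through the cache, locked so the entry stays valid
     * until it is explicitly invalidated: */
    void *vaddr = qemu_map_cache(phys_addr, size, 1 /* lock */);
    /* ... use vaddr ... */
    qemu_invalidate_entry(vaddr);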
|
There is no need for qemu_map_cache_unlock; just use
qemu_invalidate_entry instead.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
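In code terms, callers that previously did an explicit unlock now go
through the invalidate path; a minimal before/after sketch:

    /* Before: */
    qemu_map_cache_unlock(vaddr);

    /* After: */
    qemu_invalidate_entry(vaddr);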
|
Fix the implementation of qemu_map_cache: correctly support size
arguments different from 0 or MCACHE_BUCKET_SIZE.
The new implementation supports locked mapcache entries whose size is
a multiple of MCACHE_BUCKET_SIZE. qemu_invalidate_entry can correctly
find and unmap these "large" mapcache entries, given that the virtual
address passed to qemu_invalidate_entry is the same as the one
returned by qemu_map_cache when the locked mapcache entry was created.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
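One plausible way to implement that lookup (the struct and field names
below are illustrative, not taken from this log) is to keep a
reverse-mapping record for every locked mapping, keyed by the pointer
returned to the caller:

    /* Hypothetical bookkeeping for locked "large" mapcache entries. */
    typedef struct MapCacheRev {
        uint8_t *vaddr_req;              /* pointer returned to caller  */
        target_phys_addr_t paddr_index;  /* first bucket of the mapping */
        target_phys_addr_t size;         /* multiple of
                                            MCACHE_BUCKET_SIZE          */
        QTAILQ_ENTRY(MapCacheRev) next;
    } MapCacheRev;

    /* qemu_map_cache(..., lock != 0): append one MapCacheRev per
     * mapping.  qemu_invalidate_entry(buffer): walk the list, match
     * buffer against vaddr_req, and unmap size bytes starting at that
     * entry's bucket. */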
|
This function allows unlocking a ram_ptr given by qemu_get_ram_ptr.
After a call to qemu_put_ram_ptr, the pointer may be unmapped from
QEMU when running on Xen.
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Acked-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
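A short usage sketch of the resulting pairing (the surrounding code is
illustrative):

    void *ptr = qemu_get_ram_ptr(addr);  /* mapping stays locked while
                                            in use                     */
    memcpy(buf, ptr, len);
    qemu_put_ram_ptr(ptr);               /* under Xen, the map cache
                                            may now unmap this pointer */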
|
Adds a cap on the number of map cache entries. This prevents the map
cache from overwhelming system memory.
Also removes the bitmap macros, #including bitmap.h instead.
Signed-off-by: John Baboval <john.baboval@virtualcomputer.com>
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
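A minimal sketch of how such a cap could be enforced; the limit, the
counter, and the eviction helper below are assumptions for
illustration:

    #define MAPCACHE_MAX_ENTRIES 1024          /* illustrative limit */

    if (mapcache->num_entries >= MAPCACHE_MAX_ENTRIES) {
        /* Reclaim an unlocked entry before mapping a new bucket, so
         * the cache's host-memory footprint stays bounded. */
        evict_unlocked_entry(mapcache);
    }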
|
On an IA32 or IA32 PAE host, we currently cannot create an HVM guest
with more than 2G of memory, because it is almost impossible for QEMU
to find a virtual address range that is both large enough and
contiguous to map the HVM guest's whole physical address space.
This patch fixes the issue by mapping dynamically, in small blocks of
memory.
Each call to qemu_get_ram_ptr makes a call to qemu_map_cache with the
lock option, so the map cache will not unmap these ram_ptrs.
Blocks that do not belong to RAM, but usually to a device ROM or to a
framebuffer, are handled in a separate function, so the whole RAMBlock
can be mapped.
Signed-off-by: Jun Nakajima <jun.nakajima@intel.com>
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
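A simplified sketch of the bucket idea (MCACHE_BUCKET_SIZE and
qemu_map_cache appear elsewhere in this log; the helpers and the
bucket size here are illustrative): the guest physical address is
split into a bucket index and an offset, and only the bucket actually
being accessed is mapped:

    #define MCACHE_BUCKET_SHIFT 16                /* e.g. 64 KiB buckets */
    #define MCACHE_BUCKET_SIZE  (1UL << MCACHE_BUCKET_SHIFT)

    uint8_t *qemu_map_cache(target_phys_addr_t phys_addr,
                            target_phys_addr_t size, uint8_t lock)
    {
        target_phys_addr_t index  = phys_addr >> MCACHE_BUCKET_SHIFT;
        target_phys_addr_t offset = phys_addr & (MCACHE_BUCKET_SIZE - 1);
        MapCacheEntry *entry = lookup_bucket(index);  /* hypothetical */

        if (!entry_is_mapped(entry, index)) {
            /* Map just this bucket of guest memory into QEMU's
             * address space, e.g. via xc_map_foreign_bulk(). */
            remap_bucket(entry, index);
        }
        if (lock) {
            entry->lock++;   /* pinned for qemu_get_ram_ptr callers */
        }
        return entry->vaddr_base + offset;
    }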
|