|
When initializing a QPCIBus, track which QTestState the bus is
associated with (so that a later patch can then explicitly use
that test state for all communication on the bus, rather than
blindly relying on global_qtest). Update the initialization
functions to take another parameter, and update all callers to
pass in the state (for now, most callers get away with passing the
current global_qtest as that state, although this required
fixing the order of initialization to ensure qtest_start() is
called before qpci_init*() in rtl8139-test, and provided an
opportunity to pass in the allocator in e1000e-test).
Touch up some allocations to use g_new0() rather than g_malloc()
while in the area, and simplify some code (all implementations
of QOSOps provide a .init_allocator() that never fails).
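The shape of the change, as a minimal sketch (the field layout and the
allocator parameter are illustrative, based on the description above, not a
verbatim copy of the libqos headers):

    /* The bus remembers which QTestState it talks through. */
    typedef struct QPCIBus {
        QTestState *qts;    /* test connection used for all accesses on this bus */
        /* ... memory/IO read-write hooks, allocator, config accessors ... */
    } QPCIBus;

    /* Initialization takes the test state explicitly instead of relying
     * on the global_qtest variable. */
    QPCIBus *qpci_init_pc(QTestState *qts, QGuestAllocator *alloc);

A caller that previously ran qtest_start() and then initialized the bus with
no state argument would now pass the state explicitly, along the lines of
qpci_init_pc(global_qtest, NULL).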
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
[thuth: Removed hunk from vhost-user-test.c that is not required anymore,
fixed conflict in qtest_vboot() and adjusted qpci_init_pc() in sdhci-test]
Signed-off-by: Thomas Huth <thuth@redhat.com>
|
|
The usual usage model for the libqos PCI functions is to map a specific PCI
BAR with qpci_iomap(), then pass the returned token into the IO accessor
functions. This, and the fact that iomap() returns a (void *) which
actually contains a PCI-space address, kind of suggests that the return
value from iomap() is supposed to be an opaque token.
...except that the callers expect to be able to add offsets to it, which
also assumes the compiler supports pointer arithmetic on a (void *) and
treats it as working in byte offsets.
To clarify this situation, change iomap() and the IO accessors to take
a definitely opaque BAR handle (enforced with a wrapper struct) along with
an offset within the BAR. This changes both the functions and all the
callers.
There were a number of places that checked whether iomap() returned non-NULL,
and/or initialized the result to NULL beforehand. Since iomap() already
assert()s if it fails to map the BAR, these tests were mostly pointless and
are removed.
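A sketch of the resulting interface (the struct contents are illustrative;
what matters is the opaque handle and the explicit offset):

    /* Wrapper struct makes the BAR token genuinely opaque to callers. */
    typedef struct QPCIBar {
        uint64_t addr;      /* PCI-space base of the mapped BAR */
    } QPCIBar;

    QPCIBar qpci_iomap(QPCIDevice *dev, int barno, uint64_t *sizeptr);

    /* Accessors take the handle plus an explicit byte offset within the
     * BAR, instead of a (void *) the caller does arithmetic on. */
    uint32_t qpci_io_readl(QPCIDevice *dev, QPCIBar token, uint64_t off);
    void qpci_io_writel(QPCIDevice *dev, QPCIBar token, uint64_t off,
                        uint32_t value);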
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
|
|
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
Remove glib.h includes, as it is provided by osdep.h.
This commit was created with scripts/clean-includes.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Tested-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.
This commit was created with scripts/clean-includes.
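As an illustration (the local header shown is only an example), a converted
test source starts like this:

    /* osdep.h must come first; it already pulls in glib.h and the common
     * system headers, so those are not listed again. */
    #include "qemu/osdep.h"
    #include "libqtest.h"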
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Tested-by: Eric Blake <eblake@redhat.com>
|
|
Originally, timers were tick based, and it made sense to add ticks to
the current time to know when to trigger an alarm.
But since commit 7447545 ("change all other clock references to use
nanosecond resolution accessors"), all timers use nanoseconds, and we need
to convert device ticks to nanoseconds by doing something like:
y = muldiv64(x, get_ticks_per_sec(), PCI_FREQUENCY)
where x is the number of device ticks and y the number of system ticks.
y can be used directly as nanoseconds in the timer functions because one
system tick is one nanosecond (get_ticks_per_sec() is 10^9).
But as the PCI frequency is 33 MHz, we can also do:
y = x * 30; /* 33 MHz PCI period is 30 ns */
which is much simpler.
This implies a PCI frequency of 33.333333 MHz rather than exactly 33 MHz,
but that is correct.
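Side by side, the two forms (a sketch using the variables from the text
above, with x in device ticks and y in nanoseconds):

    /* Before: scale device ticks to system ticks (== nanoseconds). */
    y = muldiv64(x, get_ticks_per_sec(), PCI_FREQUENCY);

    /* After: one PCI tick at 33.333333 MHz lasts 30 ns. */
    y = x * 30;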
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Commit e0cf11f31c24cfb17f44ed46c254d84c78e7f6e9 ("timer: Use a single
definition of NSEC_PER_SEC for the whole codebase") renamed
NANOSECONDS_PER_SECOND to NSEC_PER_SEC.
On Mac OS X there is a <dispatch/time.h> system header which also
defines NSEC_PER_SEC. This causes compiler warnings.
Let's use the old name instead. It's longer but it doesn't clash.
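In other words, the constant ends up spelled out in full, along the lines of:

    /* The longer name avoids clashing with NSEC_PER_SEC from Mac OS X's
     * <dispatch/time.h>. */
    #define NANOSECONDS_PER_SECOND 1000000000LL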
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1436364609-7929-1-git-send-email-stefanha@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-id: c6e55468856ba0b8f95913c4da111cc0ef266541.1434113783.git.berto@igalia.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Test behaviour of timers and interrupts related to timeouts.
Signed-off-by: Frediano Ziglio <freddy77@gmail.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1420742303-3030-1-git-send-email-freddy77@gmail.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
|