author    Peter Portante <peter.portante@redhat.com>  2012-04-05 11:00:45 -0400
committer Anthony Liguori <aliguori@us.ibm.com>  2012-04-16 12:56:48 -0500
commit    158fd3ce98afd21f2e2639600f6414ea703a9121 (patch)
tree      9f889a97ce1bac7f9a65d4d2d18abdb81ea8c0ca /trace-events
parent    fc34e77bb3dd41a9932ec5830c06bcade1f5a08e (diff)
qemu-timer.c: Remove 250us timeouts
Basically, the main wait loop calls qemu_run_all_timers() unconditionally. The first thing this routine used to do was check whether a timer had been serviced, and then reset the loop timeout to the next deadline.

However, the new deadlines had not been calculated at that point, as qemu_run_timers() had not yet been called for each of the clocks. So qemu_rearm_alarm_timer() would end up with a negative or zero deadline and default to setting a 250us timeout for the loop.

As qemu_run_timers() was called for each clock, the real deadlines would be put in place, but because a loop timeout was already set, the loop timeout would not be changed. Once that 250us timeout fired, the real deadline would be used for the subsequent timeout.

For idle VMs, this effectively doubles the number of times through the loop, doubling the number of select() system calls, timer calls, etc., putting added scheduling pressure on the kernel. And under cgroups this causes a real problem, because the cgroup code does not scale well.

By simply running the timers before trying to rearm the alarm timer, we always rearm with a non-zero deadline, effectively halving the number of system calls.

Signed-off-by: Peter Portante <pportant@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
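For illustration, a minimal sketch of the reordering the message describes. Only qemu_run_all_timers(), qemu_run_timers(), and qemu_rearm_alarm_timer() are names taken from the commit message; the clock variables, the alarm-timer struct fields, and the surrounding declarations are illustrative assumptions, not the actual qemu-timer.c definitions.

    /* Illustrative stand-ins so the sketch is self-contained;
     * not the real QEMU types or globals. */
    struct QEMUClock;
    struct QEMUAlarmTimer { int pending; int expired; };

    extern struct QEMUClock *vm_clock, *rt_clock, *host_clock;
    extern struct QEMUAlarmTimer *alarm_timer;

    void qemu_run_timers(struct QEMUClock *clock);          /* assumed helper */
    void qemu_rearm_alarm_timer(struct QEMUAlarmTimer *t);  /* assumed helper */

    void qemu_run_all_timers(void)
    {
        alarm_timer->pending = 0;

        /* Run the per-clock timers first, so every clock's next
         * deadline is recomputed before the alarm timer is rearmed. */
        qemu_run_timers(vm_clock);
        qemu_run_timers(rt_clock);
        qemu_run_timers(host_clock);

        /* Rearm only after the timers have run: the deadline seen here
         * is now real and non-zero, instead of the negative/zero value
         * that used to trigger the default 250us loop timeout. */
        if (alarm_timer->expired) {
            alarm_timer->expired = 0;
            qemu_rearm_alarm_timer(alarm_timer);
        }
    }

The old ordering performed the rearm block at the top of the function, before any qemu_run_timers() call; moving it after the per-clock runs is the whole change.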
Diffstat (limited to 'trace-events')
0 files changed, 0 insertions, 0 deletions