author    Jason Wessel <jason.wessel@windriver.com>    2009-05-18 10:00:27 -0500
committer Anthony Liguori <aliguori@us.ibm.com>        2009-05-22 10:50:35 -0500
commit    40ff16248e5a7a699386ed8b7ef462af9b8af3fa (patch)
tree      b88670c117c4b451366bb03fbe2229b05b56775f /hw/serial.c
parent    7e57f0493a661e57c5a2572a8818d35267482922 (diff)
serial: fix lost character after sysrq
After creating an automated regression test to exercise the sysrq responses while running a Linux image in qemu, I found that the emulated uart was eating the character right after the sysrq about 75% of the time.

The problem is that qemu sets the LSR_DR (data ready) bit on a serial break. The automated tests can send the break and the sysrq character quickly enough that the qemu serial fifo already holds a real character. When a valid character is in the fifo, it gets consumed by the serial driver in the guest OS. Real hardware also appears to set LSR_DR, but always appears to present a null byte in this condition. This patch changes the qemu behavior to match the tested characteristics of a real 16550 chip.

Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
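For context, the sketch below (not part of the patch) illustrates how a guest driver's break handling can end up consuming the byte that follows a break: when the LSR reports both BI and DR, the driver drains RBR, and before this fix qemu could hand it the queued sysrq character instead of a null byte. The register offsets follow the standard 16550 layout; inb(), handle_break() and handle_char() are hypothetical placeholders, not QEMU or Linux APIs.

/*
 * Hedged illustration of a guest-side 16550 receive path.
 * Assumes standard 16550 register offsets; the port-I/O and
 * callback helpers are hypothetical.
 */
#include <stdint.h>

#define UART_RBR     0      /* receive buffer register */
#define UART_LSR     5      /* line status register    */
#define UART_LSR_DR  0x01   /* data ready              */
#define UART_LSR_BI  0x10   /* break interrupt         */

extern uint8_t inb(uint16_t port);      /* hypothetical port read   */
extern void handle_break(void);         /* hypothetical break hook  */
extern void handle_char(uint8_t c);     /* hypothetical rx hook     */

static void uart_rx_poll(uint16_t base)
{
    uint8_t lsr = inb(base + UART_LSR);

    if (lsr & UART_LSR_BI) {
        handle_break();
        if (lsr & UART_LSR_DR) {
            /*
             * DR is set, so the driver drains RBR.  Real hardware
             * returns the null byte queued for the break; before this
             * patch qemu could return the sysrq character here, which
             * is how that character was lost.
             */
            (void)inb(base + UART_RBR);
        }
        return;
    }

    if (lsr & UART_LSR_DR) {
        handle_char(inb(base + UART_RBR));
    }
}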
Diffstat (limited to 'hw/serial.c')
-rw-r--r--  hw/serial.c | 2 ++
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/hw/serial.c b/hw/serial.c
index a82c29cf9c..71f545d86c 100644
--- a/hw/serial.c
+++ b/hw/serial.c
@@ -586,6 +586,8 @@ static int serial_can_receive(SerialState *s)
static void serial_receive_break(SerialState *s)
{
s->rbr = 0;
+ /* When the LSR_DR is set a null byte is pushed into the fifo */
+ fifo_put(s, RECV_FIFO, '\0');
s->lsr |= UART_LSR_BI | UART_LSR_DR;
serial_update_irq(s);
}