author     Thomas Huth <thuth@redhat.com>   2024-08-30 15:38:35 +0200
committer  Thomas Huth <thuth@redhat.com>   2024-09-04 12:28:00 +0200
commit     ff41da503089c6da3f7c4ffe534540ba8de81727 (patch)
tree       40c0c6b4e23600034f89bcf13595439c9736e941 /docs/devel/testing
parent     6d62722ebd21d39a163b568f16068199181e5c62 (diff)
docs/devel: Split testing docs from the build docs and move to separate folder
Building and testing are two separate topics, so let's split the testing into
a separate category and move the related files into a separate folder.

Message-ID: <20240830133841.142644-42-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Diffstat (limited to 'docs/devel/testing')
-rw-r--r--  docs/devel/testing/acpi-bits.rst             155
-rw-r--r--  docs/devel/testing/ci-definitions.rst.inc    121
-rw-r--r--  docs/devel/testing/ci-jobs.rst.inc           190
-rw-r--r--  docs/devel/testing/ci-runners.rst.inc        116
-rw-r--r--  docs/devel/testing/ci.rst                     14
-rw-r--r--  docs/devel/testing/fuzzing.rst               304
-rw-r--r--  docs/devel/testing/index.rst                  14
-rw-r--r--  docs/devel/testing/main.rst                 1557
-rw-r--r--  docs/devel/testing/qgraph.rst                628
-rw-r--r--  docs/devel/testing/qtest.rst                  91
10 files changed, 3190 insertions, 0 deletions
diff --git a/docs/devel/testing/acpi-bits.rst b/docs/devel/testing/acpi-bits.rst
new file mode 100644
index 0000000000..78aeb6aa3c
--- /dev/null
+++ b/docs/devel/testing/acpi-bits.rst
@@ -0,0 +1,155 @@
+==================================
+ACPI/SMBIOS testing using biosbits
+==================================
+************
+Introduction
+************
+Biosbits is software written by Josh Triplett that can be downloaded
+from https://biosbits.org/. The GitHub codebase can be found
+`here <https://github.com/biosbits/bits/tree/master>`__. It exercises
+BIOS components such as ACPI and SMBIOS tables directly through the
+ACPICA BIOS interpreter (a freely available C-based library written by Intel,
+downloadable from https://acpica.org/ and included with biosbits), without an
+operating system getting involved in between. Bios-bits has python integration
+with grub, so the actual routines that exercise BIOS components can be written
+in python instead of bash-ish (grub's native scripting language).
+There are several advantages to testing the BIOS directly in a real physical
+machine or in a VM, as opposed to indirectly discovering BIOS issues through
+the operating system (the OS). Operating systems tend to work around BIOS
+problems and hide them from the end user. We have more control over what we
+want to test and how, by being as close to the BIOS on a running system as
+possible, without a complicated software component such as an operating
+system coming in between.
+Another issue is that we cannot exercise BIOS components such as ACPI and
+SMBIOS without being at the highest hardware privilege level, for example
+ring 0 on x86. Since the OS executes in ring 0 whereas normal user-land
+software resides in unprivileged ring 3, the operating system would have to
+be modified in order to write test routines that exercise and test the BIOS.
+This is not possible in all cases. Lastly, test frameworks and routines are
+preferably written in a high-level scripting language such as python. OSes
+and OS modules are generally written in low-level languages such as C and
+assembly. Writing test routines in a low-level language makes things more
+cumbersome. These and other reasons make bios-bits very attractive for
+testing BIOSes. More details on the inspiration for developing biosbits and
+its real-life uses can be found in [#a]_ and [#b]_.
+
+For QEMU, we maintain a fork of bios bits on GitLab, along with all the
+dependent submodules, `here <https://gitlab.com/qemu-project/biosbits-bits>`__.
+This fork contains numerous fixes, a newer ACPICA and changes specific to
+running these functional QEMU tests using bits. The author of this document
+is the sole maintainer of the QEMU fork of the bios bits repository. For more
+information, please see the author's `FOSDEM talk on this bios-bits based test
+framework <https://fosdem.org/2024/schedule/event/fosdem-2024-2262-exercising-qemu-generated-acpi-smbios-tables-using-biosbits-from-within-a-guest-vm-/>`__.
+
+*********************************
+Description of the test framework
+*********************************
+
+Under the directory ``tests/functional/``, ``test_acpi_bits.py`` is a QEMU
+functional test that drives all this.
+
+A brief description of the various test files follows.
+
+Under ``tests/functional/`` as the root we have:
+
+::
+
+ ├── acpi-bits
+ │ ├── bits-config
+ │ │ └── bits-cfg.txt
+ │ ├── bits-tests
+ │ ├── smbios.py2
+ │ ├── testacpi.py2
+ │ └── testcpuid.py2
+ ├── test_acpi_bits.py
+
+* ``tests/functional``:
+
+ ``test_acpi_bits.py``:
+ This is the main python functional test script that generates a
+ biosbits iso. It then spawns a QEMU VM with it, collects the log and reports
+ test failures. This is the script to look at if you want to add or change
+ some component of the log parsing, add a new command line option to alter
+ how QEMU is spawned, etc. Test writers typically would not need to
+ modify this script unless they wanted to enhance or change the log parsing
+ for their tests. In order to enable debugging, you can set the **V=1**
+ environment variable. This enables verbose mode for the test and also dumps
+ the entire log from bios bits, plus more information in case a failure
+ happens. You can also set **BITS_DEBUG=1** to turn on debug mode. It will
+ enable verbose logs and also retain the temporary work directory the test
+ used, so that you can inspect it and run the specific commands manually.
+
+ In order to run this test, please perform the following steps from the QEMU
+ build directory (assuming that the sources are in ".."):
+ ::
+
+ $ export PYTHONPATH=../python:../tests/functional
+ $ export QEMU_TEST_QEMU_BINARY=$PWD/qemu-system-x86_64
+ $ python3 ../tests/functional/test_acpi_bits.py
+
+ The above will run all acpi-bits functional tests (producing output in
+ TAP format).
+
+ You can inspect the log files in tests/functional/x86_64/test_acpi_bits.*/
+ for more information about the run or in order to diagnose issues.
+ If you pass V=1 in the environment, more diagnostic logs will be written
+ to the test log.
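+
+ For example, a debug run from the build directory might look like this,
+ reusing the environment variables set up in the steps above::
+
+   $ V=1 BITS_DEBUG=1 python3 ../tests/functional/test_acpi_bits.py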
+
+* ``tests/functional/acpi-bits/bits-config``:
+
+ This location contains biosbits configuration files that determine how the
+ software runs the tests.
+
+ ``bits-cfg.txt``:
+ This is the biosbits config file that determines what tests
+ or actions are performed by bits. A description of the config options is
+ provided in the file itself.
+
+* ``tests/functional/acpi-bits/bits-tests``:
+
+ This directory contains biosbits python based tests that are run from within
+ the biosbits environment in the spawned VM. New additions of test cases can
+ be made in the appropriate test file. For example, new acpi tests can go
+ into testacpi.py2 and one would call testsuite.add_test() to register the new
+ test so that it gets executed as a part of the ACPI tests.
+ It might occasionally be necessary to disable some subtests or add a new
+ test that belongs to a test suite not already present in this directory. To
+ do this, please clone the bits source from
+ https://gitlab.com/qemu-project/biosbits-bits/-/tree/qemu-bits.
+ Note that this is the "qemu-bits" branch and not the "bits" branch of the
+ repository. "qemu-bits" is the branch where we have made all the
+ QEMU-specific enhancements, and we must use the source from this branch
+ only.
+ Copy the test suite/script that needs modification (addition of new tests
+ or disabling of existing ones) from the python directory into this
+ directory. For example, in order to change cpuid related tests, copy the
+ following file into this directory and rename it with a .py2 extension:
+ https://gitlab.com/qemu-project/biosbits-bits/-/blob/qemu-bits/python/testcpuid.py
+ Then make your additions and changes here. In summary, the steps are:
+
+ (a) Copy unmodified test script to this directory from bits source.
+ (b) Add a SPDX license header.
+ (c) Perform modifications to the test.
+
+ Steps (a), (b) and (c) should preferably go into separate commits, so that
+ the original test script and the changes we have made are kept separate and
+ clear. (a) and (b) can sometimes be combined into a single commit.
+
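+ As a minimal sketch of steps (a) and (b) from the command line (the paths
+ and clone location are illustrative)::
+
+   $ git clone -b qemu-bits https://gitlab.com/qemu-project/biosbits-bits.git
+   $ cp biosbits-bits/python/testcpuid.py \
+        tests/functional/acpi-bits/bits-tests/testcpuid.py2
+   $ # add an SPDX license header to the copied file, then commit it
+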
+ The test framework will then use your modified test script to run the test.
+ No further changes would be needed. Please check the logs to make sure that
+ appropriate changes have taken effect.
+
+ The tests have a .py2 extension in order to indicate that:
+
+ (a) They are python 2.7 based scripts and not python 3 scripts.
+ (b) They are run from within the bios bits VM and are not subject to the
+ QEMU build/test python script maintenance and dependency resolutions.
+ (c) They should not be loaded by the test framework by accident when
+ running tests.
+
+
+Author: Ani Sinha <anisinha@redhat.com>
+
+References:
+-----------
+.. [#a] https://blog.linuxplumbersconf.org/2011/ocw/system/presentations/867/original/bits.pdf
+.. [#b] https://www.youtube.com/watch?v=36QIepyUuhg
+.. [#c] https://fosdem.org/2024/schedule/event/fosdem-2024-2262-exercising-qemu-generated-acpi-smbios-tables-using-biosbits-from-within-a-guest-vm-/
diff --git a/docs/devel/testing/ci-definitions.rst.inc b/docs/devel/testing/ci-definitions.rst.inc
new file mode 100644
index 0000000000..6d5c6fd9f2
--- /dev/null
+++ b/docs/devel/testing/ci-definitions.rst.inc
@@ -0,0 +1,121 @@
+Definition of terms
+===================
+
+This section defines the terms used in this document and correlates them with
+what is currently used on QEMU.
+
+Automated tests
+---------------
+
+An automated test is written with a test framework, using its generic test
+functions/classes. The test framework can run the tests and report their
+success or failure [1]_.
+
+An automated test has essentially three parts:
+
+1. The initialization of the parameters, where the inputs and the expected
+ results are set up;
+2. The call to the code that should be tested;
+3. An assertion, comparing the result from the previous call with the expected
+ result set during the initialization of the parameters. If the result
+ matches the expected result, the test has been successful; otherwise, it has
+ failed.
+
+Unit testing
+------------
+
+A unit test is responsible for exercising individual software components as a
+unit, like interfaces, data structures, and functionality, uncovering errors
+within the boundaries of a component. The verification effort is in the
+smallest software unit and focuses on the internal processing logic and data
+structures. A test case of unit tests should be designed to uncover errors due
+to erroneous computations, incorrect comparisons, or improper control flow [2]_.
+
+On QEMU, unit testing is represented by the 'check-unit' target from 'make'.
+
+Functional testing
+------------------
+
+A functional test focuses on the functional requirement of the software.
+Deriving sets of input conditions, the functional tests should fully exercise
+all the functional requirements for a program. Functional testing is
+complementary to other testing techniques, attempting to find errors like
+incorrect or missing functions, interface errors, behavior errors, and
+initialization and termination errors [3]_.
+
+On QEMU, functional testing is represented by the 'check-qtest' target from
+'make'.
+
+System testing
+--------------
+
+System tests ensure all application elements mesh properly while the overall
+functionality and performance are achieved [4]_. Some or all system components
+are integrated to create a complete system to be tested as a whole. System
+testing ensures that components are compatible, interact correctly, and
+transfer the right data at the right time across their interfaces. As system
+testing focuses on interactions, use case-based testing is a practical approach
+to system testing [5]_. Note that, in some cases, system testing may require
+interaction with third-party software, like operating system images, databases,
+networks, and so on.
+
+On QEMU, system testing is represented by the 'check-avocado' target from
+'make'.
+
+Flaky tests
+-----------
+
+A flaky test is defined as a test that exhibits both a passing and a failing
+result with the same code on different runs. Some usual reasons for an
+intermittent/flaky test are async wait, concurrency, and test order dependency
+[6]_.
+
+Gating
+------
+
+A gate restricts the move of code from one stage to another on a
+test/deployment pipeline. The step move is granted with approval. The approval
+can be a manual intervention or a set of tests succeeding [7]_.
+
+On QEMU, the gating process happens during the pull request. The approval is
+done by the project leader running their own set of tests. The pull request
+gets merged when the tests succeed.
+
+Continuous Integration (CI)
+---------------------------
+
+Continuous integration (CI) requires the builds of the entire application and
+the execution of a comprehensive set of automated tests every time there is a
+need to commit any set of changes [8]_. The automated tests can be composed of
+the unit, functional, system, and other tests.
+
+Some key points about continuous integration (CI) [9]_:
+
+1. System tests may depend on external software (operating system images,
+ firmware, database, network).
+2. It may take a long time to build and test. It may be impractical to build
+ the system being developed several times per day.
+3. If the development platform is different from the target platform, it may
+ not be possible to run system tests in the developer’s private workspace.
+ There may be differences in hardware, operating system, or installed
+ software. Therefore, more time is required for testing the system.
+
+References
+----------
+
+.. [1] Sommerville, Ian (2016). Software Engineering. p. 233.
+.. [2] Pressman, Roger S. & Maxim, Bruce R. (2020). Software Engineering,
+ A Practitioner’s Approach. p. 48, 376, 378, 381.
+.. [3] Pressman, Roger S. & Maxim, Bruce R. (2020). Software Engineering,
+ A Practitioner’s Approach. p. 388.
+.. [4] Pressman, Roger S. & Maxim, Bruce R. (2020). Software Engineering,
+ A Practitioner’s Approach. Software Engineering, p. 377.
+.. [5] Sommerville, Ian (2016). Software Engineering. p. 59, 232, 240.
+.. [6] Luo, Qingzhou, et al. An empirical analysis of flaky tests.
+ Proceedings of the 22nd ACM SIGSOFT International Symposium on
+ Foundations of Software Engineering. 2014.
+.. [7] Humble, Jez & Farley, David (2010). Continuous Delivery:
+ Reliable Software Releases Through Build, Test, and Deployment, p. 122.
+.. [8] Humble, Jez & Farley, David (2010). Continuous Delivery:
+ Reliable Software Releases Through Build, Test, and Deployment, p. 55.
+.. [9] Sommerville, Ian (2016). Software Engineering. p. 743.
diff --git a/docs/devel/testing/ci-jobs.rst.inc b/docs/devel/testing/ci-jobs.rst.inc
new file mode 100644
index 0000000000..3756bbe355
--- /dev/null
+++ b/docs/devel/testing/ci-jobs.rst.inc
@@ -0,0 +1,190 @@
+.. _ci_var:
+
+Custom CI/CD variables
+======================
+
+QEMU CI pipelines can be tuned by setting some CI environment variables.
+
+Set variable globally in the user's CI namespace
+------------------------------------------------
+
+Variables can be set globally in the user's CI namespace setting.
+
+For further information about how to set these variables, please refer to::
+
+ https://docs.gitlab.com/ee/ci/variables/#add-a-cicd-variable-to-a-project
+
+Set variable manually when pushing a branch or tag to the user's repository
+---------------------------------------------------------------------------
+
+Variables can be set manually when pushing a branch or tag, using
+git-push command line arguments.
+
+Example setting the QEMU_CI_EXAMPLE_VAR variable:
+
+.. code::
+
+ git push -o ci.variable="QEMU_CI_EXAMPLE_VAR=value" myrepo mybranch
+
+For further information about how to set these variables, please refer to::
+
+ https://docs.gitlab.com/ee/user/project/push_options.html#push-options-for-gitlab-cicd
+
+Setting aliases in your git config
+----------------------------------
+
+You can use aliases to make it easier to push branches with different
+CI configurations. For example define an alias for triggering CI:
+
+.. code::
+
+ git config --local alias.push-ci "push -o ci.variable=QEMU_CI=1"
+ git config --local alias.push-ci-now "push -o ci.variable=QEMU_CI=2"
+
+Which lets you run:
+
+.. code::
+
+ git push-ci
+
+to create the pipeline, or:
+
+.. code::
+
+ git push-ci-now
+
+to create and run the pipeline.
+
+
+Variable naming and grouping
+----------------------------
+
+The variables used by QEMU's CI configuration are grouped together
+in a handful of namespaces:
+
+ * QEMU_JOB_nnnn - variables to be defined in individual jobs
+ or templates, to influence the shared rules defined in the
+ .base_job_template.
+
+ * QEMU_CI_nnn - variables to be set by contributors in their
+ repository CI settings, or as git push variables, to influence
+ which jobs get run in a pipeline
+
+ * QEMU_CI_CONTAINER_TAG - the tag used to publish containers
+ in stage 1, for use by build jobs in stage 2. Defaults to
+ 'latest', but if running pipelines for different branches
+ concurrently, it should be overridden per pipeline.
+
+ * QEMU_CI_UPSTREAM - gitlab namespace that is considered to be
+ the 'upstream'. This defaults to 'qemu-project'. Contributors
+ may choose to override this if they are modifying rules in
+ base.yml and need to validate how they will operate when in
+ an upstream context, as opposed to their fork context.
+
+ * nnn - other misc variables not falling into the above
+ categories, or using different names for historical reasons
+ and not yet converted.
+
+Maintainer controlled job variables
+-----------------------------------
+
+The following variables may be set when defining a job in the
+CI configuration file.
+
+QEMU_JOB_CIRRUS
+~~~~~~~~~~~~~~~
+
+The job makes use of Cirrus CI infrastructure, requiring the
+configuration setup for cirrus-run to be present in the repository
+
+QEMU_JOB_OPTIONAL
+~~~~~~~~~~~~~~~~~
+
+The job is expected to be successful in general, but is not run
+by default due to need to conserve limited CI resources. It is
+available to be started manually by the contributor in the CI
+pipelines UI.
+
+QEMU_JOB_ONLY_FORKS
+~~~~~~~~~~~~~~~~~~~
+
+The job results are only of interest to contributors prior to
+submitting code. They are not required as part of the gating
+CI pipeline.
+
+QEMU_JOB_SKIPPED
+~~~~~~~~~~~~~~~~
+
+The job is not reliably successful in general, so is not
+currently suitable to be run by default. Ideally this should
+be a temporary marker until the problems can be addressed, or
+the job permanently removed.
+
+QEMU_JOB_PUBLISH
+~~~~~~~~~~~~~~~~
+
+The job is for publishing content after a branch has been
+merged into the upstream default branch.
+
+QEMU_JOB_AVOCADO
+~~~~~~~~~~~~~~~~
+
+The job runs the Avocado integration test suite
+
+Contributor controlled runtime variables
+----------------------------------------
+
+The following variables may be set by contributors to control
+job execution
+
+QEMU_CI
+~~~~~~~
+
+By default, no pipelines will be created on contributor forks
+in order to preserve CI credits.
+
+Set this variable to 1 to create the pipelines, but leave all
+the jobs to be manually started from the UI.
+
+Set this variable to 2 to create the pipelines and run all
+the jobs immediately, as was the historical behaviour.
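+
+For example, to create a pipeline and run all of its jobs immediately when
+pushing a branch, reusing the push-option syntax shown earlier (the remote
+and branch names are illustrative):
+
+.. code::
+
+   git push -o ci.variable="QEMU_CI=2" myrepo mybranch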
+
+QEMU_CI_AVOCADO_TESTING
+~~~~~~~~~~~~~~~~~~~~~~~
+By default, tests using the Avocado framework are not run automatically in
+the pipelines (because multiple artifacts have to be downloaded, and if
+these artifacts are not already cached, downloading them makes the jobs
+reach the timeout limit). Set this variable to have the tests using the
+Avocado framework run automatically.
+
+Other misc variables
+--------------------
+
+These variables are primarily to control execution of jobs on
+private runners
+
+AARCH64_RUNNER_AVAILABLE
+~~~~~~~~~~~~~~~~~~~~~~~~
+If you've got access to an aarch64 host that can be used as a gitlab-CI
+runner, you can set this variable to enable the tests that require this
+kind of host. The runner should be tagged with "aarch64".
+
+AARCH32_RUNNER_AVAILABLE
+~~~~~~~~~~~~~~~~~~~~~~~~
+If you've got access to an armhf host, or an aarch64 host that can run
+aarch32 EL0 code, to be used as a gitlab-CI runner, you can set this
+variable to enable the tests that require this kind of host. The
+runner should be tagged with "aarch32".
+
+S390X_RUNNER_AVAILABLE
+~~~~~~~~~~~~~~~~~~~~~~
+If you've got access to an IBM Z host that can be used as a gitlab-CI
+runner, you can set this variable to enable the tests that require this
+kind of host. The runner should be tagged with "s390x".
+
+CCACHE_DISABLE
+~~~~~~~~~~~~~~
+The jobs are configured to use "ccache" by default since this typically
+reduces compilation time, at the cost of increased storage. If the
+use of "ccache" is suspected to be hurting the overall job execution
+time, set the "CCACHE_DISABLE=1" env variable to disable it.
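+
+For example, a sketch reusing the push-option syntax from above (the remote
+and branch names are illustrative):
+
+.. code::
+
+   git push -o ci.variable="CCACHE_DISABLE=1" myrepo mybranch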
diff --git a/docs/devel/testing/ci-runners.rst.inc b/docs/devel/testing/ci-runners.rst.inc
new file mode 100644
index 0000000000..67b23d3719
--- /dev/null
+++ b/docs/devel/testing/ci-runners.rst.inc
@@ -0,0 +1,116 @@
+Jobs on Custom Runners
+======================
+
+Besides the jobs run under the various CI systems listed before, there
+are a number of additional jobs that will run before an actual merge.
+These use the same GitLab CI service/framework already used for all
+other GitLab based CI jobs, but rely on additional systems, not the
+ones provided by GitLab as "shared runners".
+
+The architecture of GitLab's CI service allows different machines to
+be set up with GitLab's "agent", called gitlab-runner, which will take
+care of running jobs created by events such as a push to a branch.
+Here, a machine properly configured with GitLab's gitlab-runner is called
+a "custom runner".
+
+The GitLab CI job definitions for the custom runners are located under::
+
+ .gitlab-ci.d/custom-runners.yml
+
+Custom runners entail custom machines. To see a list of the machines
+currently deployed in the QEMU GitLab CI and their maintainers, please
+refer to the QEMU `wiki <https://wiki.qemu.org/AdminContacts>`__.
+
+Machine Setup Howto
+-------------------
+
+For all Linux based systems, the setup can be mostly automated by the
+execution of two Ansible playbooks. Create an ``inventory`` file
+under ``scripts/ci/setup``, such as this::
+
+ fully.qualified.domain
+ other.machine.hostname
+
+You may need to set some variables in the inventory file itself. One
+very common need is to tell Ansible to use a Python 3 interpreter on
+those hosts. This would look like::
+
+ fully.qualified.domain ansible_python_interpreter=/usr/bin/python3
+ other.machine.hostname ansible_python_interpreter=/usr/bin/python3
+
+Build environment
+~~~~~~~~~~~~~~~~~
+
+The ``scripts/ci/setup/$DISTRO/build-environment.yml`` Ansible
+playbook will set up machines with the environment needed to perform
+builds and run QEMU tests. This playbook consists of the installation
+of various required packages (and a general package update while at
+it).
+
+The minimum required version of Ansible successfully tested with this
+playbook is 2.8.0 (a version check is embedded within the playbook
+itself). To run the playbook, execute::
+
+ cd scripts/ci/setup
+ ansible-playbook -i inventory $DISTRO/build-environment.yml
+
+Please note that most of the tasks in the playbook require superuser
+privileges, such as those from the ``root`` account or those obtained
+by ``sudo``. If necessary, please refer to ``ansible-playbook``
+options such as ``--become``, ``--become-method``, ``--become-user``
+and ``--ask-become-pass``.
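+
+For example, to run the playbook as an unprivileged user that can use
+``sudo`` (a sketch; adjust the become options to your environment)::
+
+  cd scripts/ci/setup
+  ansible-playbook -i inventory --become --ask-become-pass $DISTRO/build-environment.yml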
+
+gitlab-runner setup and registration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The gitlab-runner agent needs to be installed on each machine that
+will run jobs. The association between a machine and a GitLab project
+happens with a registration token. To find the registration token for
+your repository/project, navigate on GitLab's web UI to:
+
+ * Settings (the gears-like icon at the bottom of the left hand side
+ vertical toolbar), then
+ * CI/CD, then
+ * Runners, and click on the "Expand" button, then
+ * Under "Set up a specific Runner manually", look for the value under
+ "And this registration token:"
+
+Copy the ``scripts/ci/setup/vars.yml.template`` file to
+``scripts/ci/setup/vars.yml``. Then, set the
+``gitlab_runner_registration_token`` variable to the value obtained
+earlier.
+
+To run the playbook, execute::
+
+ cd scripts/ci/setup
+ ansible-playbook -i inventory gitlab-runner.yml
+
+Following the registration, it's necessary to configure the runner tags,
+and optionally other configurations on the GitLab UI. Navigate to:
+
+ * Settings (the gears like icon), then
+ * CI/CD, then
+ * Runners, and click on the "Expand" button, then
+ * "Runners activated for this project", then
+ * Click on the "Edit" icon (next to the "Lock" Icon)
+
+Tags are very important as they are used to route specific jobs to
+specific types of runners, so it's a good idea to double check that
+the automatically created tags are consistent with the OS and
+architecture. For instance, an Ubuntu 20.04 aarch64 system should
+have tags set as::
+
+ ubuntu_20.04,aarch64
+
+Because the job definition at ``.gitlab-ci.d/custom-runners.yml``
+would contain::
+
+ ubuntu-20.04-aarch64-all:
+ tags:
+ - ubuntu_20.04
+ - aarch64
+
+It's also recommended to:
+
+ * increase the "Maximum job timeout" to something like ``2h``
+ * give it a better Description
diff --git a/docs/devel/testing/ci.rst b/docs/devel/testing/ci.rst
new file mode 100644
index 0000000000..ed88a2010b
--- /dev/null
+++ b/docs/devel/testing/ci.rst
@@ -0,0 +1,14 @@
+.. _ci:
+
+==
+CI
+==
+
+Most of QEMU's CI is run on GitLab's infrastructure although a number
+of other CI services are used for specialised purposes. The most up to
+date information about them and their status can be found on the
+`project wiki testing page <https://wiki.qemu.org/Testing/CI>`_.
+
+.. include:: ci-definitions.rst.inc
+.. include:: ci-jobs.rst.inc
+.. include:: ci-runners.rst.inc
diff --git a/docs/devel/testing/fuzzing.rst b/docs/devel/testing/fuzzing.rst
new file mode 100644
index 0000000000..3bfcb33fc4
--- /dev/null
+++ b/docs/devel/testing/fuzzing.rst
@@ -0,0 +1,304 @@
+========
+Fuzzing
+========
+
+This document describes the virtual-device fuzzing infrastructure in QEMU and
+how to use it to implement additional fuzzers.
+
+Basics
+------
+
+Fuzzing operates by passing inputs to an entry point/target function. The
+fuzzer tracks the code coverage triggered by the input. Based on these
+findings, the fuzzer mutates the input and repeats the fuzzing.
+
+To fuzz QEMU, we rely on libfuzzer. Unlike other fuzzers such as AFL, libfuzzer
+is an *in-process* fuzzer. For the developer, this means that it is their
+responsibility to ensure that state is reset between fuzzing-runs.
+
+Building the fuzzers
+--------------------
+
+To build the fuzzers, install a recent version of clang and configure with the
+following (substituting the clang binaries with the version you installed).
+Here, ``--enable-sanitizers`` is optional, but it allows us to reliably detect
+bugs such as out-of-bounds accesses, use-after-frees, double-frees etc.::
+
+ CC=clang-8 CXX=clang++-8 /path/to/configure --enable-fuzzing \
+ --enable-sanitizers
+
+Fuzz targets are built similarly to system targets::
+
+ make qemu-fuzz-i386
+
+This builds ``./qemu-fuzz-i386``.
+
+The first option to this command is ``--fuzz-target=FUZZ_NAME``.
+To list all of the available fuzzers, run ``qemu-fuzz-i386`` with no arguments.
+
+For example::
+
+ ./qemu-fuzz-i386 --fuzz-target=virtio-scsi-fuzz
+
+Internally, libfuzzer parses all arguments that do not begin with ``"--"``.
+Information about these is available by passing ``-help=1``.
+
+Now the only thing left to do is wait for the fuzzer to trigger potential
+crashes.
+
+Useful libFuzzer flags
+----------------------
+
+As mentioned above, libFuzzer accepts some arguments. Passing ``-help=1`` will
+list the available arguments. In particular, these arguments might be helpful:
+
+* ``CORPUS_DIR/`` : Specify a directory as the last argument to libFuzzer.
+ libFuzzer stores each "interesting" input in this corpus directory. The next
+ time you run libFuzzer, it will read all of the inputs from the corpus, and
+ continue fuzzing from there. You can also specify multiple directories.
+ libFuzzer loads existing inputs from all specified directories, but will only
+ write new ones to the first one specified.
+
+* ``-max_len=4096`` : specify the maximum byte-length of the inputs libFuzzer
+ will generate.
+
+* ``-close_fd_mask={1,2,3}`` : close stdout, stderr, or both. Useful for
+ targets that trigger many debug/error messages, or create output on the
+ serial console.
+
+* ``-jobs=4 -workers=4`` : These arguments configure libFuzzer to run 4 fuzzers in
+ parallel (4 fuzzing jobs in 4 worker processes). Alternatively, with only
+ ``-jobs=N``, libFuzzer automatically spawns a number of workers less than or equal
+ to half the available CPU cores. Replace 4 with a number appropriate for your
+ machine. Make sure to specify a ``CORPUS_DIR``, which will allow the parallel
+ fuzzers to share information about the interesting inputs they find.
+
+* ``-use_value_profile=1`` : For each comparison operation, libFuzzer computes
+ ``(caller_pc&4095) | (popcnt(Arg1 ^ Arg2) << 12)`` and places this in the
+ coverage table. Useful for targets with "magic" constants. If Arg1 came from
+ the fuzzer's input and Arg2 is a magic constant, then each time the Hamming
+ distance between Arg1 and Arg2 decreases, libFuzzer adds the input to the
+ corpus.
+
+* ``-shrink=1`` : Tries to make elements of the corpus "smaller". Might lead to
+ better coverage performance, depending on the target.
+
+Note that libFuzzer's exact behavior will depend on the version of
+clang and libFuzzer used to build the device fuzzers.
+
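+For example, a longer fuzzing session might combine several of these flags,
+reusing the virtio-scsi target from above (the corpus directory name is
+arbitrary)::
+
+   mkdir -p corpus-virtio-scsi
+   ./qemu-fuzz-i386 --fuzz-target=virtio-scsi-fuzz \
+       -max_len=4096 -jobs=4 -workers=4 corpus-virtio-scsi/
+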
+Generating Coverage Reports
+---------------------------
+
+Code coverage is a crucial metric for evaluating a fuzzer's performance.
+libFuzzer's output includes a "cov: " column that gives the total number of
+unique blocks/edges covered. To examine coverage on a line-by-line basis, we
+can use Clang coverage:
+
+ 1. Configure libFuzzer to store a corpus of all interesting inputs (see
+ CORPUS_DIR above)
+ 2. ``./configure`` the QEMU build with ::
+
+ --enable-fuzzing \
+ --extra-cflags="-fprofile-instr-generate -fcoverage-mapping"
+
+ 3. Re-run the fuzzer. Specify $CORPUS_DIR/* as an argument, telling libfuzzer
+ to execute all of the inputs in $CORPUS_DIR and exit (see the example
+ after this list). Once the process exits, you should find a file named
+ "default.profraw" in the working directory.
+ 4. Execute these commands to generate a detailed HTML coverage-report::
+
+ llvm-profdata merge -output=default.profdata default.profraw
+ llvm-cov show ./path/to/qemu-fuzz-i386 -instr-profile=default.profdata \
+ --format html -output-dir=/path/to/output/report
+
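+For step 3, the invocation might look like this, reusing the virtio-scsi
+target from the earlier example and a corpus directory populated in step 1::
+
+   ./qemu-fuzz-i386 --fuzz-target=virtio-scsi-fuzz $CORPUS_DIR/*
+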
+Adding a new fuzzer
+-------------------
+
+Coverage over virtual devices can be improved by adding additional fuzzers.
+Fuzzers are kept in ``tests/qtest/fuzz/`` and should be added to
+``tests/qtest/fuzz/meson.build``
+
+Fuzzers can rely on both qtest and libqos to communicate with virtual devices.
+
+1. Create a new source file. For example ``tests/qtest/fuzz/foo-device-fuzz.c``.
+
+2. Write the fuzzing code using the libqtest/libqos API. See existing fuzzers
+ for reference.
+
+3. Add the fuzzer to ``tests/qtest/fuzz/meson.build``.
+
+Fuzzers can be more-or-less thought of as special qtest programs which can
+modify the qtest commands and/or qtest command arguments based on inputs
+provided by libfuzzer. Libfuzzer passes a byte array and length. Commonly the
+fuzzer loops over the byte-array interpreting it as a list of qtest commands,
+addresses, or values.
+
+The Generic Fuzzer
+------------------
+
+Writing a fuzz target can be a lot of effort (especially if a device driver
+has not been built out within libqos). Many devices can be fuzzed to some
+degree, without any device-specific code, using the generic-fuzz target.
+
+The generic-fuzz target is capable of fuzzing devices over their PIO, MMIO,
+and DMA input-spaces. To apply the generic-fuzz to a device, we need to define
+two env-variables, at minimum:
+
+* ``QEMU_FUZZ_ARGS=`` is the set of QEMU arguments used to configure a machine, with
+ the device attached. For example, if we want to fuzz the virtio-net device
+ attached to a pc-i440fx machine, we can specify::
+
+ QEMU_FUZZ_ARGS="-M pc -nodefaults -netdev user,id=user0 \
+ -device virtio-net,netdev=user0"
+
+* ``QEMU_FUZZ_OBJECTS=`` is a set of space-delimited strings used to identify
+ the MemoryRegions that will be fuzzed. These strings are compared against
+ MemoryRegion names and MemoryRegion owner names, to decide whether each
+ MemoryRegion should be fuzzed. These strings support globbing. For the
+ virtio-net example, we could use one of ::
+
+ QEMU_FUZZ_OBJECTS='virtio-net'
+ QEMU_FUZZ_OBJECTS='virtio*'
+ QEMU_FUZZ_OBJECTS='virtio* pcspk' # Fuzz the virtio devices and the speaker
+ QEMU_FUZZ_OBJECTS='*' # Fuzz the whole machine
+
+The ``"info mtree"`` and ``"info qom-tree"`` monitor commands can be especially
+useful for identifying the ``MemoryRegion`` and ``Object`` names used for
+matching.
+
+As a generic rule-of-thumb, the more ``MemoryRegions``/Devices we match, the
+greater the input-space, and the smaller the probability of finding crashing
+inputs for individual devices. As such, it is usually a good idea to limit the
+fuzzer to only a few ``MemoryRegions``.
+
+To ensure that these env variables have been configured correctly, we can use::
+
+ ./qemu-fuzz-i386 --fuzz-target=generic-fuzz -runs=0
+
+The output should contain a complete list of matched MemoryRegions.
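+
+Putting this together, a complete generic-fuzz invocation for the virtio-net
+example above might look like::
+
+   export QEMU_FUZZ_ARGS="-M pc -nodefaults -netdev user,id=user0 \
+       -device virtio-net,netdev=user0"
+   export QEMU_FUZZ_OBJECTS='virtio-net'
+   ./qemu-fuzz-i386 --fuzz-target=generic-fuzz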
+
+OSS-Fuzz
+--------
+QEMU is continuously fuzzed on `OSS-Fuzz
+<https://github.com/google/oss-fuzz>`_. By default, the OSS-Fuzz build
+will try to fuzz every fuzz-target. Since the generic-fuzz target
+requires additional information provided in environment variables, we
+pre-define some generic-fuzz configs in
+``tests/qtest/fuzz/generic_fuzz_configs.h``. Each config must specify:
+
+- ``.name``: To identify the fuzzer config
+
+- ``.args`` OR ``.argfunc``: A string or pointer to a function returning a
+ string. These strings are used to specify the ``QEMU_FUZZ_ARGS``
+ environment variable. ``argfunc`` is useful when the config relies on e.g.
+ a dynamically created temp directory, or a free tcp/udp port.
+
+- ``.objects``: A string that specifies the ``QEMU_FUZZ_OBJECTS`` environment
+ variable.
+
+To fuzz additional devices/device configuration on OSS-Fuzz, send patches for
+either a new device-specific fuzzer or a new generic-fuzz config.
+
+Build details:
+
+- The Dockerfile that sets up the environment for building QEMU's
+ fuzzers on OSS-Fuzz can be found in the `OSS-Fuzz repository
+ <https://github.com/google/oss-fuzz/blob/master/projects/qemu/Dockerfile>`__.
+
+- The script responsible for building the fuzzers can be found in the
+ QEMU source tree at ``scripts/oss-fuzz/build.sh``
+
+Building Crash Reproducers
+-----------------------------------------
+When we find a crash, we should try to create an independent reproducer, that
+can be used on a non-fuzzer build of QEMU. This filters out any potential
+false-positives, and improves the debugging experience for developers.
+Here are the steps for building a reproducer for a crash found by the
+generic-fuzz target.
+
+- Ensure the crash reproduces::
+
+ qemu-fuzz-i386 --fuzz-target... ./crash-...
+
+- Gather the QTest output for the crash::
+
+ QEMU_FUZZ_TIMEOUT=0 QTEST_LOG=1 FUZZ_SERIALIZE_QTEST=1 \
+ qemu-fuzz-i386 --fuzz-target... ./crash-... &> /tmp/trace
+
+- Reorder and clean-up the resulting trace::
+
+ scripts/oss-fuzz/reorder_fuzzer_qtest_trace.py /tmp/trace > /tmp/reproducer
+
+- Get the arguments needed to start qemu, and provide a path to qemu::
+
+ less /tmp/trace # The args should be logged at the top of this file
+ export QEMU_ARGS="-machine ..."
+ export QEMU_PATH="path/to/qemu-system"
+
+- Ensure the crash reproduces in qemu-system::
+
+ $QEMU_PATH $QEMU_ARGS -qtest stdio < /tmp/reproducer
+
+- From the crash output, obtain some string that identifies the crash. This
+ can be a line in the stack-trace, for example::
+
+ export CRASH_TOKEN="hw/usb/hcd-xhci.c:1865"
+
+- Minimize the reproducer::
+
+ scripts/oss-fuzz/minimize_qtest_trace.py -M1 -M2 \
+ /tmp/reproducer /tmp/reproducer-minimized
+
+- Confirm that the minimized reproducer still crashes::
+
+ $QEMU_PATH $QEMU_ARGS -qtest stdio < /tmp/reproducer-minimized
+
+- Create a one-liner reproducer that can be sent over email::
+
+ ./scripts/oss-fuzz/output_reproducer.py -bash /tmp/reproducer-minimized
+
+- Output the C source code for a test case that will reproduce the bug::
+
+ ./scripts/oss-fuzz/output_reproducer.py -owner "John Smith <john@smith.com>"\
+ -name "test_function_name" /tmp/reproducer-minimized
+
+- Report the bug and send a patch with the C reproducer upstream
+
+Implementation Details / Fuzzer Lifecycle
+-----------------------------------------
+
+The fuzzer has two entrypoints that libfuzzer calls. libfuzzer provides its
+own ``main()``, which performs some setup, and calls the entrypoints:
+
+``LLVMFuzzerInitialize``: called prior to fuzzing. Used to initialize all of the
+necessary state
+
+``LLVMFuzzerTestOneInput``: called for each fuzzing run. Processes the input and
+resets the state at the end of each run.
+
+In more detail:
+
+``LLVMFuzzerInitialize`` parses the arguments to the fuzzer (must start with two
+dashes, so they are ignored by libfuzzer ``main()``). Currently, the arguments
+select the fuzz target. Then, the qtest client is initialized. If the target
+requires qos, qgraph is set up and the QOM/LIBQOS modules are initialized.
+Then the QGraph is walked and the QEMU cmd_line is determined and saved.
+
+After this, the ``vl.c:main`` is called to set up the guest. There are
+target-specific hooks that can be called before and after main, for
+additional setup (e.g. PCI setup, or VM snapshotting).
+
+``LLVMFuzzerTestOneInput``: Uses qtest/qos functions to act based on the fuzz
+input. It is also responsible for manually calling ``main_loop_wait`` to
+ensure that bottom halves are executed, and for any cleanup required before
+the next input.
+
+Since the same process is reused for many fuzzing runs, QEMU state needs to
+be reset at the end of each run. For example, this can be done by rebooting
+the VM after each run.
+
+ - *Pros*: Straightforward and fast for simple fuzz targets.
+
+ - *Cons*: Depending on the device, does not reset all device state. If the
+ device requires some initialization prior to being ready for fuzzing (common
+ for QOS-based targets), this initialization needs to be done after each
+ reboot.
+
+ - *Example target*: ``i440fx-qtest-reboot-fuzz``
diff --git a/docs/devel/testing/index.rst b/docs/devel/testing/index.rst
new file mode 100644
index 0000000000..2711fd78b7
--- /dev/null
+++ b/docs/devel/testing/index.rst
@@ -0,0 +1,14 @@
+Testing QEMU
+------------
+
+Details about how to test QEMU and how it is integrated into our CI
+testing infrastructure.
+
+.. toctree::
+ :maxdepth: 3
+
+ main
+ qtest
+ acpi-bits
+ ci
+ fuzzing
diff --git a/docs/devel/testing/main.rst b/docs/devel/testing/main.rst
new file mode 100644
index 0000000000..af73d3d64f
--- /dev/null
+++ b/docs/devel/testing/main.rst
@@ -0,0 +1,1557 @@
+.. _testing:
+
+Testing in QEMU
+===============
+
+QEMU's testing infrastructure is fairly complex as it covers
+everything from unit testing and exercising specific sub-systems all
+the way to full blown acceptance tests. To get an overview of the
+tests you can run ``make check-help`` from either the source or build
+tree.
+
+Most (but not all) tests are also integrated into the meson build
+system, so they can be run directly from the build tree, for example:
+
+.. code::
+
+ [./pyvenv/bin/]meson test --suite qemu:softfloat
+
+will run just the softfloat tests.
+
+The rest of this document will cover the details for specific test
+groups.
+
+Testing with "make check"
+-------------------------
+
+The "make check" testing family includes most of the C based tests in QEMU.
+
+The usual way to run these tests is:
+
+.. code::
+
+ make check
+
+which includes QAPI schema tests, unit tests, QTests and some iotests.
+Different sub-types of "make check" tests will be explained below.
+
+Before running tests, it is best to build QEMU programs first. Some tests
+expect the executables to exist and will fail with obscure messages if they
+cannot find them.
+
+Unit tests
+~~~~~~~~~~
+
+Unit tests, which can be invoked with ``make check-unit``, are simple C tests
+that typically link to individual QEMU object files and exercise them by
+calling exported functions.
+
+If you are writing new code in QEMU, consider adding a unit test, especially
+for utility modules that are relatively stateless or have few dependencies. To
+add a new unit test:
+
+1. Create a new source file. For example, ``tests/unit/foo-test.c``.
+
+2. Write the test. Normally you would include the header file which exports
+ the module API, then verify the interface behaves as expected from your
+ test. The test code should be organized with the glib testing framework.
+ Copying and modifying an existing test is usually a good idea.
+
+3. Add the test to ``tests/unit/meson.build``. The unit tests are listed in a
+ dictionary called ``tests``. The values are any additional sources and
+ dependencies to be linked with the test. For a simple test whose source
+ is in ``tests/unit/foo-test.c``, it is enough to add an entry like::
+
+ {
+ ...
+ 'foo-test': [],
+ ...
+ }
+
+Since unit tests don't require environment variables, the simplest way to debug
+a unit test failure is often directly invoking it or even running it under
+``gdb``. However, there can still be differences in behavior between ``make``
+invocations and your manual run, due to the ``$MALLOC_PERTURB_`` environment
+variable (which affects memory reclamation and catches invalid pointers
+better) and gtester options. If necessary, you can run
+
+.. code::
+
+ make check-unit V=1
+
+and copy the actual command line which executes the unit test, then run
+it from the command line.
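+
+For example, a single unit test binary can usually be invoked directly from
+the build tree, or run under ``gdb`` (the test name here is illustrative):
+
+.. code::
+
+   ./tests/unit/test-qdist
+   gdb --args ./tests/unit/test-qdist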
+
+QTest
+~~~~~
+
+QTest is a device emulation testing framework. It can be very useful to test
+device models; it could also control certain aspects of QEMU (such as virtual
+clock stepping), with a special purpose "qtest" protocol. Refer to
+:doc:`qtest` for more details.
+
+QTest cases can be executed with
+
+.. code::
+
+ make check-qtest
+
+Writing portable test cases
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Both unit tests and qtests can run on POSIX hosts as well as Windows hosts.
+Care must be taken when writing portable test cases that can be built and run
+successfully on various hosts. The following list shows some best practices:
+
+* Use portable APIs from glib whenever necessary, e.g.: g_setenv(),
+ g_mkdtemp(), g_mkdir().
+* Avoid using hardcoded /tmp for temporary file directory.
+ Use g_get_tmp_dir() instead.
+* Bear in mind that Windows has different special string representations for
+ stdin/stdout/stderr and null devices. For example, if your test case uses
+ "/dev/fd/2" and "/dev/null" on Linux, remember to use "2" and "nul" on
+ Windows instead. Also, IO redirection does not work on Windows, so avoid
+ relying on redirections such as "2>nul".
+* If your test case uses the blkdebug feature, use relative paths to pass
+ the config and image file paths in the command line, as a Windows absolute
+ path contains the delimiter ":" which will confuse the blkdebug parser.
+* Use double quotes in your extra QEMU command line in your test cases
+ instead of single quotes, as Windows does not drop single quotes when
+ passing the command line to QEMU.
+* Windows opens a file in text mode by default, while a POSIX compliant
+ implementation treats text files and binary files the same. So if your
+ test case opens a file to write some data and later wants to compare the
+ written data with the original, be sure to pass the letter 'b' as part of
+ the mode string to fopen(), or the O_BINARY flag for the open() call.
+* If a certain test case can only run on POSIX or Linux hosts, use a proper
+ #ifdef in the code. If the whole test suite cannot run on Windows, disable
+ the build in the meson.build file.
+
+QAPI schema tests
+~~~~~~~~~~~~~~~~~
+
+The QAPI schema tests validate the QAPI parser used by QMP, by feeding
+predefined input to the parser and comparing the result with the reference
+output.
+
+The input/output data is managed under the ``tests/qapi-schema`` directory.
+Each test case includes four files that have a common base name:
+
+ * ``${casename}.json`` - the file contains the JSON input for feeding the
+ parser
+ * ``${casename}.out`` - the file contains the expected stdout from the parser
+ * ``${casename}.err`` - the file contains the expected stderr from the parser
+ * ``${casename}.exit`` - the expected error code
+
+Consider adding a new QAPI schema test when you are making a change on the QAPI
+parser (either fixing a bug or extending/modifying the syntax). To do this:
+
+1. Add four files for the new case as explained above. For example:
+
+ ``$EDITOR tests/qapi-schema/foo.{json,out,err,exit}``.
+
+2. Add the new test in ``tests/Makefile.include``. For example:
+
+ ``qapi-schema += foo.json``
+
+check-block
+~~~~~~~~~~~
+
+``make check-block`` runs a subset of the block layer iotests (the tests that
+are in the "auto" group).
+See the "QEMU iotests" section below for more information.
+
+QEMU iotests
+------------
+
+QEMU iotests, under the directory ``tests/qemu-iotests``, is the testing
+framework widely used to test block layer related features. It is higher level
+than "make check" tests and 99% of the code is written in bash or Python
+scripts. The success criterion is golden output comparison, and the
+test files are named with numbers.
+
+To run iotests, make sure QEMU is built successfully, then switch to the
+``tests/qemu-iotests`` directory under the build directory, and run ``./check``
+with desired arguments from there.
+
+By default, "raw" format and "file" protocol is used; all tests will be
+executed, except the unsupported ones. You can override the format and protocol
+with arguments:
+
+.. code::
+
+ # test with qcow2 format
+ ./check -qcow2
+ # or test a different protocol
+ ./check -nbd
+
+It's also possible to list test numbers explicitly:
+
+.. code::
+
+ # run selected cases with qcow2 format
+ ./check -qcow2 001 030 153
+
+Cache mode can be selected with the "-c" option, which may help reveal bugs
+that are specific to a certain cache mode.
+
+More options are supported by the ``./check`` script, run ``./check -h`` for
+help.
+
+Writing a new test case
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Consider writing a test case when you are making any changes to the block
+layer. An iotest case is usually the choice for that. There are already many
+test cases, so it is possible that extending one of them may achieve the goal
+and save the boilerplate to create one. (Unfortunately, there isn't a 100%
+reliable way to find a related one out of hundreds of tests. One approach is
+using ``git grep``.)
+
+Usually an iotest case consists of two files. One is an executable that
+produces output to stdout and stderr, the other is the expected reference
+output. They are given the same number in file names. E.g. Test script ``055``
+and reference output ``055.out``.
+
+In rare cases, when outputs differ between cache mode ``none`` and others, a
+``.out.nocache`` file is added. In other cases, when outputs differ between
+image formats, more than one ``.out`` file is created, ending with the
+respective format names, e.g. ``178.out.qcow2`` and ``178.out.raw``.
+
+There isn't a hard rule about how to write a test script, but a new test is
+usually a (copy and) modification of an existing case. There are a few
+commonly used ways to create a test:
+
+* A Bash script. It will make use of several environment variables related
+ to the testing procedure, and could source a group of ``common.*`` libraries
+ for some common helper routines.
+
+* A Python unittest script. Import ``iotests`` and create a subclass of
+ ``iotests.QMPTestCase``, then call the ``iotests.main`` method. The downside
+ of this approach is that the output is too scarce, and the script is
+ considered harder to debug.
+
+* A simple Python script without using the unittest module. This could also
+ import ``iotests`` for launching QEMU and utilities etc., but it doesn't
+ inherit from ``iotests.QMPTestCase`` and therefore doesn't use the Python
+ unittest execution. This is a combination of 1 and 2.
+
+Pick the language per your preference since both Bash and Python have
+comparable library support for invoking and interacting with QEMU programs. If
+you opt for Python, it is strongly recommended to write Python 3 compatible
+code.
+
+Both Python and Bash frameworks in iotests provide helpers to manage test
+images. They can be used to create and clean up images under the test
+directory. If no I/O or any protocol specific feature is needed, it is often
+more convenient to use the pseudo block driver, ``null-co://``, as the test
+image, which doesn't require image creation or cleaning up. Avoid system-wide
+devices or files whenever possible, such as ``/dev/null`` or ``/dev/zero``.
+Otherwise, image locking implications have to be considered. For example,
+another application on the host may have locked the file, possibly leading to a
+test failure. If using such devices is explicitly desired, consider adding
+the ``locking=off`` option to disable image locking.
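+
+For example, a Bash test case can exercise I/O against the pseudo block
+driver instead of a real image file (a sketch, assuming the usual iotests
+helpers which provide ``$QEMU_IO``):
+
+.. code::
+
+   $QEMU_IO -c "read 0 64k" "null-co://"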
+
+Debugging a test case
+~~~~~~~~~~~~~~~~~~~~~
+
+The following options to the ``check`` script can be useful when debugging
+a failing test (a combined example follows this list):
+
+* ``-gdb`` wraps every QEMU invocation in a ``gdbserver``, which waits for a
+ connection from a gdb client. The options given to ``gdbserver`` (e.g. the
+ address on which to listen for connections) are taken from the ``$GDB_OPTIONS``
+ environment variable. By default (if ``$GDB_OPTIONS`` is empty), it listens on
+ ``localhost:12345``.
+ It is possible to connect to it for example with
+ ``gdb -iex "target remote $addr"``, where ``$addr`` is the address
+ ``gdbserver`` listens on.
+ If the ``-gdb`` option is not used, ``$GDB_OPTIONS`` is ignored,
+ regardless of whether it is set or not.
+
+* ``-valgrind`` attaches a valgrind instance to QEMU. If it detects
+ warnings, it will print and save the log in
+ ``$TEST_DIR/<valgrind_pid>.valgrind``.
+ The final command line will be ``valgrind --log-file=$TEST_DIR/
+ <valgrind_pid>.valgrind --error-exitcode=99 $QEMU ...``
+
+* ``-d`` (debug) just increases the logging verbosity, showing
+ for example the QMP commands and answers.
+
+* ``-p`` (print) redirects QEMU’s stdout and stderr to the test output,
+ instead of saving it into a log file in
+ ``$TEST_DIR/qemu-machine-<random_string>``.
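+
+For example, a failing test can be run with extra logging while waiting for a
+gdb connection on the default address (the test number is illustrative):
+
+.. code::
+
+   ./check -qcow2 -d -gdb 055
+   # then, from another terminal:
+   gdb -iex "target remote localhost:12345"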
+
+Test case groups
+~~~~~~~~~~~~~~~~
+
+"Tests may belong to one or more test groups, which are defined in the form
+of a comment in the test source file. By convention, test groups are listed
+in the second line of the test file, after the "#!/..." line, like this:
+
+.. code::
+
+ #!/usr/bin/env python3
+ # group: auto quick
+ #
+ ...
+
+Another way of defining groups is creating the tests/qemu-iotests/group.local
+file. This should be used only for downstream (this file should never appear
+in upstream). This file may be used for defining some downstream test groups
+or for temporarily disabling tests, like this:
+
+.. code::
+
+ # groups for some company downstream process
+ #
+ # ci - tests to run on build
+ # down - our downstream tests, not for upstream
+ #
+ # Format of each line is:
+ # TEST_NAME TEST_GROUP [TEST_GROUP ]...
+
+ 013 ci
+ 210 disabled
+ 215 disabled
+ our-ugly-workaround-test down ci
+
+Note that the following group names have a special meaning:
+
+- quick: Tests in this group should finish within a few seconds.
+
+- auto: Tests in this group are used during "make check" and should be
+ runnable in any case. That means they should run with every QEMU binary
+ (also non-x86), with every QEMU configuration (i.e. must not fail if
+ an optional feature is not compiled in - but reporting a "skip" is ok),
+ work at least with the qcow2 file format, work with all kinds of host
+ filesystems and users (e.g. "nobody" or "root"), and must not take too
+ much memory and disk space (since CI pipelines tend to fail otherwise).
+
+- disabled: Tests in this group are disabled and ignored by check.
+
+.. _container-ref:
+
+Container based tests
+---------------------
+
+Introduction
+~~~~~~~~~~~~
+
+The container testing framework in QEMU utilizes public images to
+build and test QEMU in predefined and widely accessible Linux
+environments. This makes it possible to expand the test coverage
+across distros, toolchain flavors and library versions. The support
+was originally written for Docker although we also support Podman as
+an alternative container runtime. Although many of the target
+names and scripts are prefixed with "docker", the system will
+automatically run on whichever is configured.
+
+The container images are also used to augment the generation of tests
+for testing TCG. See :ref:`checktcg-ref` for more details.
+
+Docker Prerequisites
+~~~~~~~~~~~~~~~~~~~~
+
+Install "docker" with the system package manager and start the Docker service
+on your development machine, then make sure you have the privilege to run
+Docker commands. Typically it means setting up a passwordless ``sudo docker``
+command, or logging in as root. For example:
+
+.. code::
+
+ $ sudo yum install docker
+ $ # or `apt-get install docker` for Ubuntu, etc.
+ $ sudo systemctl start docker
+ $ sudo docker ps
+
+The last command should print an empty table, to verify the system is ready.
+
+An alternative method to set up permissions is by adding the current user to
+the "docker" group and making the docker daemon socket file (by default
+``/var/run/docker.sock``) accessible to the group:
+
+.. code::
+
+ $ sudo groupadd docker
+ $ sudo usermod $USER -a -G docker
+ $ sudo chown :docker /var/run/docker.sock
+
+Note that any one of the above configurations makes it possible for the user
+to exploit the whole host with Docker bind mounting or other privileged
+operations. So only do it on development machines.
+
+Podman Prerequisites
+~~~~~~~~~~~~~~~~~~~~
+
+Install "podman" with the system package manager.
+
+.. code::
+
+ $ sudo dnf install podman
+ $ podman ps
+
+The last command should print an empty table, to verify the system is ready.
+
+Quickstart
+~~~~~~~~~~
+
+From the source tree, type ``make docker-help`` to see the help. Testing
+can be started without configuring or building QEMU (``configure`` and
+``make`` are done in the container, with parameters defined by the
+make target):
+
+.. code::
+
+ make docker-test-build@debian
+
+This will create a container instance using the ``debian`` image (the image
+is downloaded and initialized automatically), in which the ``test-build`` job
+is executed.
+
+Registry
+~~~~~~~~
+
+The QEMU project has a container registry hosted by GitLab at
+``registry.gitlab.com/qemu-project/qemu`` which will automatically be
+used to pull in pre-built layers. This avoids unnecessary strain on
+the distro archives created by multiple developers running the same
+container build steps over and over again. This can be overridden
+locally by using the ``NOCACHE`` build option:
+
+.. code::
+
+ make docker-image-debian-arm64-cross NOCACHE=1
+
+Images
+~~~~~~
+
+Along with many other images, the ``debian`` image is defined in a Dockerfile
+in ``tests/docker/dockerfiles/``, called ``debian.docker``. The
+``make docker-help`` command will list all the available images.
+
+A ``.pre`` script can be added beside the ``.docker`` file, which will be
+executed before building the image under the build context directory. This is
+mainly used to do necessary host side setup. One such setup is ``binfmt_misc``,
+for example, to make qemu-user powered cross build containers work.
+
+Most of the existing Dockerfiles were written by hand, simply by creating
+a new ``.docker`` file under the ``tests/docker/dockerfiles/`` directory.
+This has led to an inconsistent set of packages being present across the
+different containers.
+
+Thus going forward, QEMU is aiming to automatically generate the Dockerfiles
+using the ``lcitool`` program provided by the ``libvirt-ci`` project:
+
+ https://gitlab.com/libvirt/libvirt-ci
+
+``libvirt-ci`` contains an ``lcitool`` program as well as a list of
+mappings to distribution package names for a wide variety of third
+party projects. ``lcitool`` applies the mappings to a list of build
+pre-requisites in ``tests/lcitool/projects/qemu.yml``, determines the
+list of native packages to install on each distribution, and uses them
+to generate build environments (dockerfiles and Cirrus CI variable files)
+that are consistent across OS distributions.
+
+
+Adding new build pre-requisites
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When preparing a patch series that adds a new build
+pre-requisite to QEMU, the prerequisite should be added to
+``tests/lcitool/projects/qemu.yml`` in order to make the dependency
+available in the CI build environments.
+
+In the simple case where the pre-requisite is already known to ``libvirt-ci``
+the following steps are needed:
+
+ * Edit ``tests/lcitool/projects/qemu.yml`` and add the pre-requisite
+
+ * Run ``make lcitool-refresh`` to re-generate all relevant build environment
+ manifests
+
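+For instance, assuming the new pre-requisite is already known to ``libvirt-ci``
+(the package name ``libfoo`` below is purely illustrative), the workflow boils
+down to:
+
+.. code::
+
+  $ $EDITOR tests/lcitool/projects/qemu.yml   # add the new package, e.g. libfoo
+  $ make lcitool-refresh
+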
+It may be that ``libvirt-ci`` does not know about the new pre-requisite.
+If that is the case, some extra preparation steps will be required
+first to contribute the mapping to the ``libvirt-ci`` project:
+
+ * Fork the ``libvirt-ci`` project on gitlab
+
+ * Add an entry for the new build prerequisite to
+ ``lcitool/facts/mappings.yml``, listing its native package name on as
+ many OS distros as practical. Run ``python -m pytest --regenerate-output``
+ and check that the changes are correct.
+
+ * Commit the ``mappings.yml`` change together with the regenerated test
+ files, and submit a merge request to the ``libvirt-ci`` project.
+ Please note in the description that this is a new build pre-requisite
+ desired for use with QEMU.
+
+ * CI pipeline will run to validate that the changes to ``mappings.yml``
+ are correct, by attempting to install the newly listed package on
+ all OS distributions supported by ``libvirt-ci``.
+
+ * Once the merge request is accepted, go back to QEMU and update
+ the ``tests/lcitool/libvirt-ci`` submodule to point to a commit that
+ contains the ``mappings.yml`` update. Then add the prerequisite and
+ run ``make lcitool-refresh``.
+
+ * Please also trigger gitlab container generation pipelines on your change
+ for as many OS distros as practical to make sure that there are no
+ obvious breakages when adding the new pre-requisite. Please see
+ `CI <https://www.qemu.org/docs/master/devel/ci.html>`__ documentation
+ page on how to trigger gitlab CI pipelines on your change.
+
+For enterprise distros that default to old, end-of-life versions of the
+Python runtime, QEMU uses a separate set of mappings that work with more
+recent versions. These can be found in ``tests/lcitool/mappings.yml``.
+Modifying this file should not be necessary unless the new pre-requisite
+is a Python library or tool.
+
+
+Adding new OS distros
+^^^^^^^^^^^^^^^^^^^^^
+
+In some cases ``libvirt-ci`` will not know about the OS distro that is
+desired to be tested. Before adding a new OS distro, discuss the proposed
+addition:
+
+ * Send a mail to qemu-devel, copying people listed in the
+ MAINTAINERS file for ``Build and test automation``.
+
+ There are limited CI compute resources available to QEMU, so the
+ cost/benefit tradeoff of adding new OS distros needs to be considered.
+
+ * File an issue at https://gitlab.com/libvirt/libvirt-ci/-/issues
+ pointing to the qemu-devel mail thread in the archives.
+
+ This alerts other people who might be interested in the work
+ to avoid duplication, as well as to get feedback from libvirt-ci
+    maintainers on any tips to ease the addition.
+
+Assuming there is agreement to add a new OS distro, then:
+
+ * Fork the ``libvirt-ci`` project on gitlab
+
+ * Add metadata under ``lcitool/facts/targets/`` for the new OS
+ distro. There might be code changes required if the OS distro
+ uses a package format not currently known. The ``libvirt-ci``
+ maintainers can advise on this when the issue is filed.
+
+  * Edit ``lcitool/facts/mappings.yml`` to add entries for
+ the new OS, listing the native package names for as many packages
+ as practical. Run ``python -m pytest --regenerate-output`` and
+ check that the changes are correct.
+
+ * Commit the changes to ``lcitool/facts`` and the regenerated test
+ files, and submit a merge request to the ``libvirt-ci`` project.
+    Please note in the description that this is a new OS distro
+    desired for use with QEMU.
+
+ * CI pipeline will run to validate that the changes to ``mappings.yml``
+ are correct, by attempting to install the newly listed package on
+ all OS distributions supported by ``libvirt-ci``.
+
+ * Once the merge request is accepted, go back to QEMU and update
+ the ``libvirt-ci`` submodule to point to a commit that contains
+ the ``mappings.yml`` update.
+
+
+Tests
+~~~~~
+
+Different tests are added to cover various configurations for building and
+testing QEMU. Docker tests are the executables under ``tests/docker`` named
+``test-*``. They are typically shell scripts and are built on top of a shell
+library, ``tests/docker/common.rc``, which provides helpers to find the QEMU
+source and build it.
+
+The full list of tests is printed in the ``make docker-help`` help.
+
+Debugging a Docker test failure
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+When a CI task, a maintainer or you yourself report a Docker test failure,
+follow the steps below to debug it:
+
+1. Locally reproduce the failure with the reported command line. E.g. run
+ ``make docker-test-mingw@fedora-win64-cross J=8``.
+2. Add ``V=1`` to the command line and try again to see the verbose output.
+3. Further add ``DEBUG=1`` to the command line. This will pause at a shell prompt
+ in the container right before testing starts. You could either manually
+ build QEMU and run tests from there, or press Ctrl-D to let the Docker
+ testing continue.
+4. If you press Ctrl-D, the same building and testing procedure will begin, and
+   will hopefully run into the error again. After that, you will be dropped to
+   the prompt for debugging.
+
+Options
+~~~~~~~
+
+Various options can be used to affect how Docker tests are done. The full
+list is in the ``make docker`` help text. The frequently used ones are:
+
+* ``V=1``: the same as in top level ``make``. It will be propagated to the
+ container and enable verbose output.
+* ``J=$N``: the number of parallel tasks in make commands in the container,
+ similar to the ``-j $N`` option in top level ``make``. (The ``-j`` option in
+ top level ``make`` will not be propagated into the container.)
+* ``DEBUG=1``: enables debug. See the previous "Debugging a Docker test
+ failure" section.
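+
+For example, a verbose build test of the ``debian`` image with eight parallel
+make jobs (the image and test names are the ones used earlier in this document)
+could be run with:
+
+.. code::
+
+  make docker-test-build@debian V=1 J=8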
+
+Thread Sanitizer
+----------------
+
+Thread Sanitizer (TSan) is a tool which can detect data races. QEMU supports
+building and testing with this tool.
+
+For more information on TSan:
+
+https://github.com/google/sanitizers/wiki/ThreadSanitizerCppManual
+
+Thread Sanitizer in Docker
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+TSan is currently supported in the ``ubuntu2204`` docker image.
+
+The ``test-tsan`` test will build QEMU with TSan and then run ``make check``.
+
+.. code::
+
+ make docker-test-tsan@ubuntu2204
+
+TSan warnings under docker are placed in files located at ``build/tsan/``.
+
+We recommend using ``DEBUG=1`` to allow launching the test from inside the
+container and to allow review of the warnings generated by TSan.
+
+Building and Testing with TSan
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+It is possible to build and test with TSan, with a few additional steps.
+These steps are normally done automatically in the docker.
+
+At this time, a one-time patch is needed in clang-9 or clang-10:
+
+.. code::
+
+ sed -i 's/^const/static const/g' \
+ /usr/lib/llvm-10/lib/clang/10.0.0/include/sanitizer/tsan_interface.h
+
+To configure the build for TSan:
+
+.. code::
+
+ ../configure --enable-tsan --cc=clang-10 --cxx=clang++-10 \
+ --disable-werror --extra-cflags="-O0"
+
+The runtime behavior of TSan is controlled by the ``TSAN_OPTIONS`` environment
+variable.
+
+More information on ``TSAN_OPTIONS`` can be found here:
+
+https://github.com/google/sanitizers/wiki/ThreadSanitizerFlags
+
+For example:
+
+.. code::
+
+ export TSAN_OPTIONS=suppressions=<path to qemu>/tests/tsan/suppressions.tsan \
+ detect_deadlocks=false history_size=7 exitcode=0 \
+ log_path=<build path>/tsan/tsan_warning
+
+The above ``exitcode=0`` has TSan continue without error if any warnings are found.
+This allows for running the test and then checking the warnings afterwards.
+If you want TSan to stop and exit with an error on warnings, use ``exitcode=66``.
+
+TSan Suppressions
+~~~~~~~~~~~~~~~~~
+Keep in mind that a data race reported by TSan does not necessarily indicate
+an actual bug. TSan provides several different mechanisms for suppressing
+warnings. In general it is recommended to fix the code, if possible, to
+eliminate the data race rather than suppress the warning.
+
+A few important files for suppressing warnings are:
+
+tests/tsan/suppressions.tsan - Has TSan warnings we wish to suppress at runtime.
+The comment on each suppression will typically indicate why we are
+suppressing it. More information on the file format can be found here:
+
+https://github.com/google/sanitizers/wiki/ThreadSanitizerSuppressions
+
+tests/tsan/ignore.tsan - Has TSan warnings we wish to disable
+at compile time for test or debug.
+Add flags to configure to enable:
+
+"--extra-cflags=-fsanitize-blacklist=<src path>/tests/tsan/ignore.tsan"
+
+More information on the file format can be found here under "Blacklist Format":
+
+https://github.com/google/sanitizers/wiki/ThreadSanitizerFlags
+
+TSan Annotations
+~~~~~~~~~~~~~~~~
+include/qemu/tsan.h defines annotations. See this file for more descriptions
+of the annotations themselves. Annotations can be used to suppress
+TSan warnings or give TSan more information so that it can detect proper
+relationships between accesses of data.
+
+Annotation examples can be found here:
+
+https://github.com/llvm/llvm-project/tree/master/compiler-rt/test/tsan/
+
+Good files to start with are ``annotate_happens_before.cpp`` and ``ignore_race.cpp``.
+
+The full set of annotations can be found here:
+
+https://github.com/llvm/llvm-project/blob/master/compiler-rt/lib/tsan/rtl/tsan_interface_ann.cpp
+
+docker-binfmt-image-debian-% targets
+------------------------------------
+
+It is possible to combine Debian's bootstrap scripts with a configured
+``binfmt_misc`` to bootstrap a number of Debian's distros including
+experimental ports not yet supported by a released OS. This can
+simplify setting up a rootfs by using docker to contain the foreign
+rootfs rather than manually invoking chroot.
+
+Setting up ``binfmt_misc``
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+You can use the script ``qemu-binfmt-conf.sh`` to configure a QEMU
+user binary to automatically run binaries for the foreign
+architecture. While the scripts will try their best to work with
+dynamically linked QEMU binaries, a statically linked one will present fewer
+potential complications when copied into the docker image. Modern
+kernels support the ``F`` (fix binary) flag which opens the QEMU
+executable on setup and avoids the need to find and re-open it in the
+chroot environment. This is triggered with the ``--persistent`` flag.
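+
+As a rough sketch of a persistent registration for a statically linked QEMU
+(the install path is an assumption, and the exact option spelling is best
+checked against the script's ``--help`` output), run from the top of the
+source tree:
+
+.. code::
+
+  $ sudo ./scripts/qemu-binfmt-conf.sh --qemu-path /usr/bin --persistent yes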
+
+Example invocation
+~~~~~~~~~~~~~~~~~~
+
+For example, to set up the HPPA ports build of Debian::
+
+ make docker-binfmt-image-debian-sid-hppa \
+ DEB_TYPE=sid DEB_ARCH=hppa \
+ DEB_URL=http://ftp.ports.debian.org/debian-ports/ \
+ DEB_KEYRING=/usr/share/keyrings/debian-ports-archive-keyring.gpg \
+    EXECUTABLE=$(pwd)/qemu-hppa V=1
+
+The ``DEB_`` variables are substitutions used by
+``debian-bootstrap.pre`` which is called to do the initial debootstrap
+of the rootfs before it is copied into the container. The second stage
+is run as part of the build. The final image will be tagged as
+``qemu/debian-sid-hppa``.
+
+VM testing
+----------
+
+This test suite contains scripts that bootstrap various guest images that have
+necessary packages to build QEMU. The basic usage is documented in ``Makefile``
+help which is displayed with ``make vm-help``.
+
+Quickstart
+~~~~~~~~~~
+
+Run ``make vm-help`` to list available make targets. Invoke a specific make
+command to run build test in an image. For example, ``make vm-build-freebsd``
+will build the source tree in the FreeBSD image. The command can be executed
+from either the source tree or the build dir; if the former, ``./configure`` is
+not needed. The command will then generate the test image in ``./tests/vm/``
+under the working directory.
+
+Note: images created by the scripts accept a well-known RSA key pair for SSH
+access, so they SHOULD NOT be exposed to external interfaces if you are
+concerned about attackers taking control of the guest and potentially
+exploiting a QEMU security bug to compromise the host.
+
+QEMU binaries
+~~~~~~~~~~~~~
+
+By default, ``qemu-system-x86_64`` is searched in $PATH to run the guest. If
+there isn't one, or if it is older than 2.10, the test won't work. In this case,
+provide the QEMU binary in env var: ``QEMU=/path/to/qemu-2.10+``.
+
+Likewise the path to ``qemu-img`` can be set in the ``QEMU_IMG`` environment variable.
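+
+For example, to run the FreeBSD build test with a QEMU binary and ``qemu-img``
+that are not in ``$PATH`` (the paths below are placeholders):
+
+.. code::
+
+  $ QEMU=/path/to/qemu-system-x86_64 QEMU_IMG=/path/to/qemu-img make vm-build-freebsd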
+
+Make jobs
+~~~~~~~~~
+
+The ``-j$X`` option in the make command line is not propagated into the VM;
+specify ``J=$X`` to control the make jobs in the guest.
+
+Debugging
+~~~~~~~~~
+
+Add ``DEBUG=1`` and/or ``V=1`` to the make command to allow interactive
+debugging and verbose output. If this is not enough, see the next section.
+``V=1`` will be propagated down into the make jobs in the guest.
+
+Manual invocation
+~~~~~~~~~~~~~~~~~
+
+Each guest script is an executable script with the same command line options.
+For example to work with the netbsd guest, use ``$QEMU_SRC/tests/vm/netbsd``:
+
+.. code::
+
+ $ cd $QEMU_SRC/tests/vm
+
+ # To bootstrap the image
+ $ ./netbsd --build-image --image /var/tmp/netbsd.img
+ <...>
+
+ # To run an arbitrary command in guest (the output will not be echoed unless
+ # --debug is added)
+ $ ./netbsd --debug --image /var/tmp/netbsd.img uname -a
+
+ # To build QEMU in guest
+ $ ./netbsd --debug --image /var/tmp/netbsd.img --build-qemu $QEMU_SRC
+
+ # To get to an interactive shell
+ $ ./netbsd --interactive --image /var/tmp/netbsd.img sh
+
+Adding new guests
+~~~~~~~~~~~~~~~~~
+
+Please look at existing guest scripts for how to add new guests.
+
+Most importantly, create a subclass of ``BaseVM``, implement the ``build_image()``
+method, define ``BUILD_SCRIPT``, and finally call ``basevm.main()`` from
+the script's ``main()``. A minimal sketch is shown after the list below.
+
+* Usually in ``build_image()``, a template image is downloaded from a
+ predefined URL. ``BaseVM._download_with_cache()`` takes care of the cache and
+ the checksum, so consider using it.
+
+* Once the image is downloaded, users, SSH server and QEMU build deps should
+ be set up:
+
+ - Root password set to ``BaseVM.ROOT_PASS``
+ - User ``BaseVM.GUEST_USER`` is created, and password set to
+ ``BaseVM.GUEST_PASS``
+ - SSH service is enabled and started on boot,
+ ``$QEMU_SRC/tests/keys/id_rsa.pub`` is added to ssh's ``authorized_keys``
+ file of both root and the normal user
+ - DHCP client service is enabled and started on boot, so that it can
+ automatically configure the virtio-net-pci NIC and communicate with QEMU
+ user net (10.0.2.2)
+ - Necessary packages are installed to untar the source tarball and build
+ QEMU
+
+* Write a proper ``BUILD_SCRIPT`` template, which should be a shell script that
+  untars the QEMU source tarball (passed into the guest as a raw virtio-blk
+  block device), then configures and builds it. Running ``make check`` is also
+  recommended.
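+
+Below is a minimal, hypothetical sketch of such a script. The class name, image
+URL, device name and build commands are placeholders; copy the exact ``BaseVM``
+hooks and ``BUILD_SCRIPT`` substitutions from an existing guest such as
+``tests/vm/netbsd``:
+
+.. code::
+
+    #!/usr/bin/env python3
+    # tests/vm/mybsd -- hypothetical new guest script (sketch only)
+    import sys
+    import basevm
+
+    class MyBSDVM(basevm.BaseVM):
+        name = "mybsd"
+        arch = "x86_64"
+        # Shell commands run inside the guest; real guests interpolate
+        # values such as the job count -- see an existing script.
+        BUILD_SCRIPT = """
+            set -e;
+            mkdir src && tar -C src -xf /dev/vtbd1 && cd src;  # device name is a placeholder
+            ./configure;
+            make -j4;
+            make check;
+        """
+
+        def build_image(self, img):
+            # Download and cache a template image, then customize it:
+            # users, SSH keys, DHCP and QEMU build dependencies.
+            cimg = self._download_with_cache("https://example.org/mybsd.img.xz")
+            ...
+
+    if __name__ == "__main__":
+        sys.exit(basevm.main(MyBSDVM))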
+
+Image fuzzer testing
+--------------------
+
+An image fuzzer was added to exercise format drivers. Currently only qcow2 is
+supported. To start the fuzzer, run
+
+.. code::
+
+ tests/image-fuzzer/runner.py -c '[["qemu-img", "info", "$test_img"]]' /tmp/test qcow2
+
+Alternatively, a command other than ``qemu-img info`` can be tested by
+changing the ``-c`` option.
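+
+For instance, a run that exercises ``qemu-io`` reads instead might look like
+this (the exact command list is only an illustration):
+
+.. code::
+
+  tests/image-fuzzer/runner.py -c '[["qemu-io", "$test_img", "-c", "read 0 512"]]' /tmp/test qcow2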
+
+Integration tests using the Avocado Framework
+---------------------------------------------
+
+The ``tests/avocado`` directory hosts integration tests. They're usually
+higher level tests, and may interact with external resources and with
+various guest operating systems.
+
+These tests are written using the Avocado Testing Framework (which must
+be installed separately) in conjunction with the ``avocado_qemu.Test``
+class, implemented at ``tests/avocado/avocado_qemu``.
+
+Tests based on ``avocado_qemu.Test`` can easily:
+
+ * Customize the command line arguments given to the convenience
+ ``self.vm`` attribute (a QEMUMachine instance)
+
+ * Interact with the QEMU monitor, send QMP commands and check
+ their results
+
+ * Interact with the guest OS, using the convenience console device
+ (which may be useful to assert the effectiveness and correctness of
+ command line arguments or QMP commands)
+
+ * Interact with external data files that accompany the test itself
+ (see ``self.get_data()``)
+
+ * Download (and cache) remote data files, such as firmware and kernel
+ images
+
+ * Have access to a library of guest OS images (by means of the
+ ``avocado.utils.vmimage`` library)
+
+ * Make use of various other test related utilities available at the
+ test class itself and at the utility library:
+
+ - http://avocado-framework.readthedocs.io/en/latest/api/test/avocado.html#avocado.Test
+ - http://avocado-framework.readthedocs.io/en/latest/api/utils/avocado.utils.html
+
+Running tests
+~~~~~~~~~~~~~
+
+You can run the avocado tests simply by executing:
+
+.. code::
+
+ make check-avocado
+
+This involves the automatic installation, from PyPI, of all the
+necessary avocado-framework dependencies into the QEMU venv within the
+build tree (at ``./pyvenv``). Test results are also saved within the
+build tree (at ``tests/results``).
+
+Note: the build environment must be using a Python 3 stack, and have
+the ``venv`` and ``pip`` packages installed. If necessary, make sure
+``configure`` is called with ``--python=`` and that those modules are
+available. On Debian and Ubuntu based systems, depending on the
+specific version, they may be on packages named ``python3-venv`` and
+``python3-pip``.
+
+It is also possible to run tests based on tags using the
+``make check-avocado`` command and the ``AVOCADO_TAGS`` environment
+variable:
+
+.. code::
+
+ make check-avocado AVOCADO_TAGS=quick
+
+Note that tags separated with commas have an AND behavior, while tags
+separated by spaces have an OR behavior. For more information on Avocado
+tags, see:
+
+ https://avocado-framework.readthedocs.io/en/latest/guides/user/chapters/tags.html
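+
+For instance (the tag names are just examples), the first command below selects
+tests tagged with both ``arch:x86_64`` and ``quick``, while the second selects
+tests tagged with either of the two architectures:
+
+.. code::
+
+  make check-avocado AVOCADO_TAGS=arch:x86_64,quick
+  make check-avocado AVOCADO_TAGS='arch:x86_64 arch:s390x'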
+
+To run a single test file, a couple of them, or a test within a file
+using the ``make check-avocado`` command, set the ``AVOCADO_TESTS``
+environment variable with the test files or test names. To run all
+tests from a single file, use:
+
+ .. code::
+
+ make check-avocado AVOCADO_TESTS=$FILEPATH
+
+The same is valid to run tests from multiple test files:
+
+ .. code::
+
+ make check-avocado AVOCADO_TESTS='$FILEPATH1 $FILEPATH2'
+
+To run a single test within a file, use:
+
+ .. code::
+
+ make check-avocado AVOCADO_TESTS=$FILEPATH:$TESTCLASS.$TESTNAME
+
+The same is valid to run single tests from multiple test files:
+
+ .. code::
+
+ make check-avocado AVOCADO_TESTS='$FILEPATH1:$TESTCLASS1.$TESTNAME1 $FILEPATH2:$TESTCLASS2.$TESTNAME2'
+
+The scripts installed inside the virtual environment may be used
+without an "activation". For instance, the Avocado test runner
+may be invoked by running:
+
+ .. code::
+
+ pyvenv/bin/avocado run $OPTION1 $OPTION2 tests/avocado/
+
+Note that if ``make check-avocado`` was not executed before, it is
+possible to create the Python virtual environment with the dependencies
+needed by running:
+
+ .. code::
+
+ make check-venv
+
+It is also possible to run tests from a single file or a single test within
+a test file. To run tests from a single file within the build tree, use:
+
+ .. code::
+
+ pyvenv/bin/avocado run tests/avocado/$TESTFILE
+
+To run a single test within a test file, use:
+
+ .. code::
+
+ pyvenv/bin/avocado run tests/avocado/$TESTFILE:$TESTCLASS.$TESTNAME
+
+Valid test names are visible in the output from any previous execution
+of Avocado or ``make check-avocado``, and can also be queried using:
+
+ .. code::
+
+ pyvenv/bin/avocado list tests/avocado
+
+Manual Installation
+~~~~~~~~~~~~~~~~~~~
+
+To manually install Avocado and its dependencies, run:
+
+.. code::
+
+ pip install --user avocado-framework
+
+Alternatively, follow the instructions on this link:
+
+ https://avocado-framework.readthedocs.io/en/latest/guides/user/chapters/installing.html
+
+Overview
+~~~~~~~~
+
+The ``tests/avocado/avocado_qemu`` directory provides the
+``avocado_qemu`` Python module, containing the ``avocado_qemu.Test``
+class. Here's a simple usage example:
+
+.. code::
+
+ from avocado_qemu import QemuSystemTest
+
+
+ class Version(QemuSystemTest):
+ """
+ :avocado: tags=quick
+ """
+ def test_qmp_human_info_version(self):
+ self.vm.launch()
+ res = self.vm.cmd('human-monitor-command',
+ command_line='info version')
+ self.assertRegex(res, r'^(\d+\.\d+\.\d)')
+
+To execute your test, run:
+
+.. code::
+
+ avocado run version.py
+
+Tests may be classified according to a convention by using docstring
+directives such as ``:avocado: tags=TAG1,TAG2``. To run all tests
+in the current directory, tagged as "quick", run:
+
+.. code::
+
+ avocado run -t quick .
+
+The ``avocado_qemu.Test`` base test class
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The ``avocado_qemu.Test`` class has a number of characteristics that
+are worth mentioning right away.
+
+First of all, it attempts to give each test a ready to use QEMUMachine
+instance, available at ``self.vm``. Because many tests will tweak the
+QEMU command line, launching the QEMUMachine (by using ``self.vm.launch()``)
+is left to the test writer.
+
+The base test class also has support for tests with more than one
+QEMUMachine. The way to get machines is through the ``self.get_vm()``
+method which will return a QEMUMachine instance. The ``self.get_vm()``
+method accepts arguments that will be passed to the QEMUMachine creation
+and also an optional ``name`` attribute so you can identify a specific
+machine and get it more than once through the test's methods. A simple
+and hypothetical example follows:
+
+.. code::
+
+ from avocado_qemu import QemuSystemTest
+
+
+ class MultipleMachines(QemuSystemTest):
+ def test_multiple_machines(self):
+ first_machine = self.get_vm()
+ second_machine = self.get_vm()
+ self.get_vm(name='third_machine').launch()
+
+ first_machine.launch()
+ second_machine.launch()
+
+ first_res = first_machine.cmd(
+ 'human-monitor-command',
+ command_line='info version')
+
+ second_res = second_machine.cmd(
+ 'human-monitor-command',
+ command_line='info version')
+
+ third_res = self.get_vm(name='third_machine').cmd(
+ 'human-monitor-command',
+ command_line='info version')
+
+ self.assertEqual(first_res, second_res, third_res)
+
+At test "tear down", ``avocado_qemu.Test`` handles the shutdown of all
+QEMUMachines.
+
+The ``avocado_qemu.LinuxTest`` base test class
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The ``avocado_qemu.LinuxTest`` is a further specialization of the
+``avocado_qemu.Test`` class, so it contains all the characteristics of
+the latter plus some extra features.
+
+First of all, this base class is intended for tests that need to
+interact with a fully booted and operational Linux guest. At this
+time, it uses a Fedora 31 guest image. The most basic example looks
+like this:
+
+.. code::
+
+ from avocado_qemu import LinuxTest
+
+
+ class SomeTest(LinuxTest):
+
+ def test(self):
+ self.launch_and_wait()
+ self.ssh_command('some_command_to_be_run_in_the_guest')
+
+Please refer to tests that use ``avocado_qemu.LinuxTest`` under
+``tests/avocado`` for more examples.
+
+QEMUMachine
+~~~~~~~~~~~
+
+The QEMUMachine API is already widely used in the Python iotests,
+device-crash-test and other Python scripts. It's a wrapper around the
+execution of a QEMU binary, giving its users:
+
+ * the ability to set command line arguments to be given to the QEMU
+ binary
+
+ * a ready to use QMP connection and interface, which can be used to
+ send commands and inspect its results, as well as asynchronous
+ events
+
+ * convenience methods to set commonly used command line arguments in
+ a more succinct and intuitive way
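+
+As a brief, self-contained sketch of direct QEMUMachine usage (assuming the
+``qemu`` Python package from the ``python/`` directory is importable and the
+binary path is adjusted to your build):
+
+.. code::
+
+    from qemu.machine import QEMUMachine
+
+    # The binary path is an assumption; point it at your build.
+    vm = QEMUMachine('./qemu-system-x86_64')
+    vm.add_args('-nodefaults', '-machine', 'none')  # tweak the command line
+    vm.launch()                                     # start QEMU with a QMP monitor
+    print(vm.cmd('query-status'))                   # send a QMP command
+    vm.shutdown()                                   # terminate the VM cleanly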
+
+QEMU binary selection
+^^^^^^^^^^^^^^^^^^^^^
+
+The QEMU binary used for the ``self.vm`` QEMUMachine instance will
+primarily depend on the value of the ``qemu_bin`` parameter. If it's
+not explicitly set, its default value will be the result of a dynamic
+probe in the same source tree. A suitable binary will be one that
+targets the architecture matching the host machine.
+
+Based on this description, test writers will usually rely on one of
+the following approaches:
+
+1) Set ``qemu_bin``, and use the given binary
+
+2) Do not set ``qemu_bin``, and use a QEMU binary named like
+ "qemu-system-${arch}", either in the current
+ working directory, or in the current source tree.
+
+The resulting ``qemu_bin`` value will be preserved in the
+``avocado_qemu.Test`` as an attribute with the same name.
+
+Attribute reference
+~~~~~~~~~~~~~~~~~~~
+
+Test
+^^^^
+
+Besides the attributes and methods that are part of the base
+``avocado.Test`` class, the following attributes are available on any
+``avocado_qemu.Test`` instance.
+
+vm
+''
+
+A QEMUMachine instance, initially configured according to the given
+``qemu_bin`` parameter.
+
+arch
+''''
+
+The architecture can be used on different levels of the stack, e.g. by
+the framework or by the test itself. At the framework level, it will
+currently influence the selection of a QEMU binary (when one is not
+explicitly given).
+
+Tests are also free to use this attribute value, for their own needs.
+A test may, for instance, use the same value when selecting the
+architecture of a kernel or disk image to boot a VM with.
+
+The ``arch`` attribute will be set to the test parameter of the same
+name. If one is not given explicitly, it will either be set to
+``None``, or, if the test is tagged with one (and only one)
+``:avocado: tags=arch:VALUE`` tag, it will be set to ``VALUE``.
+
+cpu
+'''
+
+The cpu model that will be set to all QEMUMachine instances created
+by the test.
+
+The ``cpu`` attribute will be set to the test parameter of the same
+name. If one is not given explicitly, it will either be set to
+``None``, or, if the test is tagged with one (and only one)
+``:avocado: tags=cpu:VALUE`` tag, it will be set to ``VALUE``.
+
+machine
+'''''''
+
+The machine type that will be set to all QEMUMachine instances created
+by the test.
+
+The ``machine`` attribute will be set to the test parameter of the same
+name. If one is not given explicitly, it will either be set to
+``None``, or, if the test is tagged with one (and only one)
+``:avocado: tags=machine:VALUE`` tag, it will be set to ``VALUE``.
+
+qemu_bin
+''''''''
+
+The preserved value of the ``qemu_bin`` parameter or the result of the
+dynamic probe for a QEMU binary in the current working directory or
+source tree.
+
+LinuxTest
+^^^^^^^^^
+
+Besides the attributes present on the ``avocado_qemu.Test`` base
+class, the ``avocado_qemu.LinuxTest`` adds the following attributes:
+
+distro
+''''''
+
+The name of the Linux distribution used as the guest image for the
+test. The name should match the **Provider** column on the list
+of images supported by the avocado.utils.vmimage library:
+
+https://avocado-framework.readthedocs.io/en/latest/guides/writer/libs/vmimage.html#supported-images
+
+distro_version
+''''''''''''''
+
+The version of the Linux distribution used as the guest image for the
+test. The name should match the **Version** column on the list
+of images supported by the avocado.utils.vmimage library:
+
+https://avocado-framework.readthedocs.io/en/latest/guides/writer/libs/vmimage.html#supported-images
+
+distro_checksum
+'''''''''''''''
+
+The sha256 hash of the guest image file used for the test.
+
+If this value is not set in the code or by a test parameter (with the
+same name), no validation on the integrity of the image will be
+performed.
+
+Parameter reference
+~~~~~~~~~~~~~~~~~~~
+
+To understand how Avocado parameters are accessed by tests, and how
+they can be passed to tests, please refer to::
+
+ https://avocado-framework.readthedocs.io/en/latest/guides/writer/chapters/writing.html#accessing-test-parameters
+
+Parameter values can be easily seen in the log files, and will look
+like the following:
+
+.. code::
+
+  PARAMS (key=qemu_bin, path=*, default=./qemu-system-x86_64) => './qemu-system-x86_64'
+
+Test
+^^^^
+
+arch
+''''
+
+The architecture that will influence the selection of a QEMU binary
+(when one is not explicitly given).
+
+Tests are also free to use this parameter value, for their own needs.
+A test may, for instance, use the same value when selecting the
+architecture of a kernel or disk image to boot a VM with.
+
+This parameter has a direct relation with the ``arch`` attribute. If
+not given, it will default to None.
+
+cpu
+'''
+
+The cpu model that will be set to all QEMUMachine instances created
+by the test.
+
+machine
+'''''''
+
+The machine type that will be set to all QEMUMachine instances created
+by the test.
+
+qemu_bin
+''''''''
+
+The exact QEMU binary to be used on QEMUMachine.
+
+LinuxTest
+^^^^^^^^^
+
+Besides the parameters present on the ``avocado_qemu.Test`` base
+class, the ``avocado_qemu.LinuxTest`` adds the following parameters:
+
+distro
+''''''
+
+The name of the Linux distribution used as the guest image for the
+test. The name should match the **Provider** column on the list
+of images supported by the avocado.utils.vmimage library:
+
+https://avocado-framework.readthedocs.io/en/latest/guides/writer/libs/vmimage.html#supported-images
+
+distro_version
+''''''''''''''
+
+The version of the Linux distribution used as the guest image for the
+test. The name should match the **Version** column on the list
+of images supported by the avocado.utils.vmimage library:
+
+https://avocado-framework.readthedocs.io/en/latest/guides/writer/libs/vmimage.html#supported-images
+
+distro_checksum
+'''''''''''''''
+
+The sha256 hash of the guest image file used for the test.
+
+If this value is not set in the code or by this parameter no
+validation on the integrity of the image will be performed.
+
+Skipping tests
+~~~~~~~~~~~~~~
+
+The Avocado framework provides Python decorators which allow for easily
+skipping tests under certain conditions, for example when a required binary
+is missing on the test system or when the running environment is a CI system.
+For further
+information about those decorators, please refer to::
+
+ https://avocado-framework.readthedocs.io/en/latest/guides/writer/chapters/writing.html#skipping-tests
+
+While the conditions for skipping tests are often specific to each one, there
+are recurring scenarios identified by the QEMU developers and the use of
+environment variables has become a kind of standard way to enable/disable tests.
+
+Here is a list of the most used variables:
+
+AVOCADO_ALLOW_LARGE_STORAGE
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Tests which are going to fetch or produce assets considered *large* are not
+going to run unless ``AVOCADO_ALLOW_LARGE_STORAGE=1`` is exported in
+the environment.
+
+The definition of *large* is a bit arbitrary here, but it usually means an
+asset which occupies at least 1GB of size on disk when uncompressed.
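+
+For example, to opt in when running the whole suite:
+
+.. code::
+
+  AVOCADO_ALLOW_LARGE_STORAGE=1 make check-avocado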
+
+SPEED
+^^^^^
+Tests which have a long runtime will not be run unless ``SPEED=slow`` is
+exported in the environment.
+
+The definition of *long* is a bit arbitrary here, and it depends on the
+usefulness of the test too. A unique test is worth spending more time on,
+small variations on existing tests perhaps less so. As a rough guide, a
+test or set of similar tests which takes more than 100 seconds to
+complete should be considered *long*.
+
+AVOCADO_ALLOW_UNTRUSTED_CODE
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+There are tests which will boot a kernel image or firmware that can be
+considered not safe to run on the developer's workstation, thus they are
+skipped by default. The definition of *not safe* is also arbitrary but
+usually it means a blob whose source or build process is not publicly
+available.
+
+You should export ``AVOCADO_ALLOW_UNTRUSTED_CODE=1`` in the environment in
+order to allow tests which make use of those kinds of assets.
+
+AVOCADO_TIMEOUT_EXPECTED
+^^^^^^^^^^^^^^^^^^^^^^^^
+The Avocado framework has a timeout mechanism which interrupts tests to avoid the
+test suite getting stuck. The timeout value can be set via a test parameter or a
+property defined in the test class; for further details see::
+
+ https://avocado-framework.readthedocs.io/en/latest/guides/writer/chapters/writing.html#setting-a-test-timeout
+
+Even though the timeout can be set by the test developer, there are some tests
+that may not have a well-defined limit of time to finish under certain
+conditions. For example, tests that take longer to execute when QEMU is
+compiled with debug flags. Therefore, the ``AVOCADO_TIMEOUT_EXPECTED`` variable
+has been used to determine whether those tests should run or not.
+
+QEMU_TEST_FLAKY_TESTS
+^^^^^^^^^^^^^^^^^^^^^
+Some tests are not working reliably and thus are disabled by default.
+This includes tests that don't run reliably on GitLab's CI which
+usually expose real issues that are rarely seen on developer machines
+due to the constraints of the CI environment. If you encounter a
+similar situation then raise a bug and mark the test as shown in
+the code snippet below:
+
+.. code::
+
+ # See https://gitlab.com/qemu-project/qemu/-/issues/nnnn
+ @skipUnless(os.getenv('QEMU_TEST_FLAKY_TESTS'), 'Test is unstable on GitLab')
+ def test(self):
+ do_something()
+
+You can also add ``:avocado: tags=flaky`` to the test meta-data so
+only the flaky tests can be run as a group:
+
+.. code::
+
+ env QEMU_TEST_FLAKY_TESTS=1 ./pyvenv/bin/avocado \
+      run tests/avocado --filter-by-tags=flaky
+
+Tests should not live in this state forever and should either be fixed
+or eventually removed.
+
+
+Uninstalling Avocado
+~~~~~~~~~~~~~~~~~~~~
+
+If you've followed the manual installation instructions above, you can
+easily uninstall Avocado. Start by listing the packages you have
+installed::
+
+ pip list --user
+
+And remove any package you want with::
+
+ pip uninstall <package_name>
+
+If you've used ``make check-avocado``, the Python virtual environment where
+Avocado is installed will be cleaned up as part of ``make check-clean``.
+
+.. _checktcg-ref:
+
+Testing with "make check-tcg"
+-----------------------------
+
+The check-tcg tests are intended for simple smoke tests of both
+linux-user and softmmu TCG functionality. However, to build test
+programs for guest targets you need to have cross compilers available.
+If your distribution supports cross compilers you can do something as
+simple as::
+
+ apt install gcc-aarch64-linux-gnu
+
+The configure script will automatically pick up their presence.
+Sometimes compilers have slightly odd names, so the build system can be
+pointed at them by passing in the appropriate configure option
+for the architecture in question, for example::
+
+ $(configure) --cross-cc-aarch64=aarch64-cc
+
+There is also a ``--cross-cc-cflags-ARCH`` flag in case additional
+compiler flags are needed to build for a given target.
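+
+For example, a big-endian Arm test build might reuse an Arm cross compiler
+with an extra flag (the compiler name here is only an assumption)::
+
+  $(configure) --cross-cc-armeb=arm-linux-gnueabihf-gcc \
+               --cross-cc-cflags-armeb=-mbig-endian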
+
+If you have the ability to run containers as the user, the build system
+will automatically use them where no system compiler is available. For
+architectures where we also support building QEMU we will generally
+use the same container to build tests. However there are a number of
+additional containers defined that have a minimal cross-build
+environment that is only suitable for building test cases. Sometimes
+we may use a bleeding edge distribution for compiler features needed
+for test cases that aren't yet in the LTS distros we support for QEMU
+itself.
+
+See :ref:`container-ref` for more details.
+
+Running subset of tests
+~~~~~~~~~~~~~~~~~~~~~~~
+
+You can build the tests for one architecture::
+
+ make build-tcg-tests-$TARGET
+
+And run with::
+
+ make run-tcg-tests-$TARGET
+
+Adding ``V=1`` to the invocation will show the details of how to
+invoke QEMU for the test which is useful for debugging tests.
+
+Running individual tests
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Tests can also be run directly from the test build directory. If you
+run ``make help`` from the test build directory you will get a list of
+all the tests that can be run. Please note that same binaries are used
+in multiple tests, for example::
+
+ make run-plugin-test-mmap-with-libinline.so
+
+will run the mmap test with the ``libinline.so`` TCG plugin. The
+gdbstub tests also re-use the test binaries but while exercising gdb.
+
+TCG test dependencies
+~~~~~~~~~~~~~~~~~~~~~
+
+The TCG tests are deliberately very light on dependencies and are
+either totally bare with minimal gcc lib support (for system-mode tests)
+or just glibc (for linux-user tests). This is because getting a cross
+compiler to work with additional libraries can be challenging.
+
+Other TCG Tests
+---------------
+
+There are a number of out-of-tree test suites that are used for more
+extensive testing of processor features.
+
+KVM Unit Tests
+~~~~~~~~~~~~~~
+
+The KVM unit tests are designed to run as a Guest OS under KVM but
+there is no reason why they can't exercise the TCG as well. It
+provides a minimal OS kernel with hooks for enabling the MMU as well
+as reporting test results via a special device::
+
+ https://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git
+
+Linux Test Project
+~~~~~~~~~~~~~~~~~~
+
+The LTP is focused on exercising the syscall interface of a Linux
+kernel. It checks that syscalls behave as documented and strives to
+exercise as many corner cases as possible. It is a useful test suite
+to run to exercise QEMU's linux-user code::
+
+ https://linux-test-project.github.io/
+
+GCC gcov support
+----------------
+
+``gcov`` is a GCC tool to analyze the testing coverage by
+instrumenting the tested code. To use it, configure QEMU with
+the ``--enable-gcov`` option and build. Then run the tests as usual.
+
+If you want to gather coverage information on a single test, the ``make
+clean-gcda`` target can be used to delete any existing coverage
+information before running a single test.
+
+You can generate an HTML coverage report by executing ``make
+coverage-html`` which will create
+``meson-logs/coveragereport/index.html``.
+
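+Putting the steps in this section together, a typical coverage run from a
+build directory looks roughly like this:
+
+.. code::
+
+  ../configure --enable-gcov
+  make
+  make check
+  make coverage-html
+  # then open meson-logs/coveragereport/index.html in a browser
+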
+Further analysis can be conducted by running the ``gcov`` command
+directly on the various .gcda output files. Please read the ``gcov``
+documentation for more information.
diff --git a/docs/devel/testing/qgraph.rst b/docs/devel/testing/qgraph.rst
new file mode 100644
index 0000000000..43342d9d65
--- /dev/null
+++ b/docs/devel/testing/qgraph.rst
@@ -0,0 +1,628 @@
+.. _qgraph:
+
+Qtest Driver Framework
+======================
+
+In order to test a specific driver, plain libqos tests need to
+take care of booting QEMU with the right machine and devices.
+This makes each test "hardcoded" for a specific configuration, reducing
+the possible coverage that it can reach.
+
+For example, the sdhci device is supported on both x86_64 and ARM boards;
+therefore a generic sdhci test should test all machines and drivers that
+support that device.
+Using only libqos APIs, the test has to manually take care of
+covering all the setups, and build the correct command line.
+
+This also introduces backward compatibility issues: if a device/driver command
+line name is changed, all tests that use it will not work
+properly anymore and need to be adjusted.
+
+The aim of qgraph is to create a graph of drivers, machines and tests such that
+a test aimed at a certain driver does not have to care about
+booting the right QEMU machine, picking the right device, building the command line
+and so on. Instead, it only defines what type of device it is testing
+(interface in qgraph terms) and the framework takes care of
+covering all supported types of devices and machine architectures.
+
+Following the above example, an interface would be ``sdhci``,
+so the sdhci-test should only care about linking its qgraph node with
+that interface. In this way, if the command line of an sdhci driver
+is changed, only the respective qgraph driver node has to be adjusted.
+
+QGraph concepts
+---------------
+
+The graph is composed of nodes, representing machines, drivers and tests,
+and edges that define the relationships between them (``CONSUMES``, ``PRODUCES``, and
+``CONTAINS``).
+
+Nodes
+~~~~~
+
+A node can be of four types:
+
+- **QNODE_MACHINE**: for example ``arm/raspi2b``
+- **QNODE_DRIVER**: for example ``generic-sdhci``
+- **QNODE_INTERFACE**: for example ``sdhci`` (interface for all ``-sdhci``
+ drivers).
+ An interface is not explicitly created, it will be automatically
+ instantiated when a node consumes or produces it.
+ An interface is simply a struct that abstracts the various drivers
+ for the same type of device, and offers an API to the nodes that
+  use it ("consume" relation in qgraph terms). That API is implemented/backed
+  up by the drivers that implement the interface ("produce" relation in qgraph
+  terms).
+- **QNODE_TEST**: for example ``sdhci-test``. A test consumes an interface
+ and tests the functions provided by it.
+
+Notes for the nodes:
+
+- QNODE_MACHINE: each machine struct must have a ``QGuestAllocator`` and
+ implement ``get_driver()`` to return the allocator mapped to the interface
+ "memory". The function can also return ``NULL`` if the allocator
+ is not set.
+- QNODE_DRIVER: driver names must be unique, and machines and nodes
+ planned to be "consumed" by other nodes must match QEMU
+  driver names, otherwise they won't be discovered.
+
+Edges
+~~~~~
+
+An edge relation between two nodes (drivers or machines) ``X`` and ``Y`` can be:
+
+- ``X CONSUMES Y``: ``Y`` can be plugged into ``X``
+- ``X PRODUCES Y``: ``X`` provides the interface ``Y``
+- ``X CONTAINS Y``: ``Y`` is part of ``X`` component
+
+Execution steps
+~~~~~~~~~~~~~~~
+
+The basic framework steps are the following:
+
+- All nodes and edges are created in their respective
+ machine/driver/test files
+- The framework starts QEMU and asks for a list of available devices
+ and machines (note that only machines and "consumed" nodes are mapped
+ 1:1 with QEMU devices)
+- The framework walks the graph starting from the available machines and
+ performs a Depth First Search for tests
+- Once a test is found, the path is walked again and all drivers are
+ allocated accordingly and the final interface is passed to the test
+- The test is executed
+- Unused objects are cleaned and the path discovery is continued
+
+Depending on the QEMU binary used, only some drivers/machines will be
+available and only tests that are reached by them will be executed.
+
+Command line
+~~~~~~~~~~~~
+
+The command line is built using node names and optional arguments
+passed by the user when building the edges.
+
+There are three types of command line arguments:
+
+- ``in node`` : created from the node name. For example, machines will
+  have ``-M <machine>`` added to their command line, while devices
+  ``-device <device>``. This is done automatically by the framework.
+- ``after node`` : added as additional argument to the node name.
+ This argument is added optionally when creating edges,
+ by setting the parameter ``after_cmd_line`` and
+ ``extra_edge_opts`` in ``QOSGraphEdgeOptions``.
+ The framework automatically adds
+ a comma before ``extra_edge_opts``,
+ because it is going to add attributes
+ after the destination node pointed by
+ the edge containing these options, and automatically
+ adds a space before ``after_cmd_line``, because it
+ adds an additional device, not an attribute.
+- ``before node`` : added as additional argument to the node name.
+ This argument is added optionally when creating edges,
+ by setting the parameter ``before_cmd_line`` in
+ ``QOSGraphEdgeOptions``. This attribute
+ is going to add attributes before the destination node
+ pointed by the edge containing these options. It is
+ helpful to commands that are not node-representable,
+                     such as ``-fsdev`` or ``-netdev``.
+
+While the command line arguments added by edges are always used, not all node
+names are used in every path walk: this is because the contained or produced
+ones are already added by QEMU, so only nodes that are "consumed" will be used
+to build the command line. Also, nodes that have ``{ "abstract" : true }``
+as a QMP attribute will lose their command line, since they are not proper
+devices to be added in QEMU.
+
+Example::
+
+    QOSGraphEdgeOptions opts = {
+        .before_cmd_line = "-drive id=drv0,if=none,file=null-co://,"
+                           "file.read-zeroes=on,format=raw",
+        .after_cmd_line = "-device scsi-hd,bus=vs0.0,drive=drv0",
+        .extra_device_opts = "id=vs0",
+    };
+
+ qos_node_create_driver("virtio-scsi-device",
+ virtio_scsi_device_create);
+ qos_node_consumes("virtio-scsi-device", "virtio-bus", &opts);
+
+This will produce the following command line:
+``-drive id=drv0,if=none,file=null-co://,file.read-zeroes=on,format=raw -device virtio-scsi-device,id=vs0 -device scsi-hd,bus=vs0.0,drive=drv0``
+
+Troubleshooting unavailable tests
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If there is no path from an available machine to a test then that test will be
+unavailable and won't execute. This can happen if a test or driver did not set
+up its qgraph node correctly. It can also happen if the necessary machine type
+or device is missing from the QEMU binary because it was compiled out or is
+otherwise unavailable.
+
+It is possible to troubleshoot unavailable tests by running::
+
+ $ QTEST_QEMU_BINARY=build/qemu-system-x86_64 build/tests/qtest/qos-test --verbose
+ # ALL QGRAPH EDGES: {
+ # src='virtio-net'
+ # |-> dest='virtio-net-tests/vhost-user/multiqueue' type=2 (node=0x559142109e30)
+ # |-> dest='virtio-net-tests/vhost-user/migrate' type=2 (node=0x559142109d00)
+ # src='virtio-net-pci'
+ # |-> dest='virtio-net' type=1 (node=0x55914210d740)
+ # src='pci-bus'
+ # |-> dest='virtio-net-pci' type=2 (node=0x55914210d880)
+ # src='pci-bus-pc'
+ # |-> dest='pci-bus' type=1 (node=0x559142103f40)
+ # src='i440FX-pcihost'
+ # |-> dest='pci-bus-pc' type=0 (node=0x55914210ac70)
+ # src='x86_64/pc'
+ # |-> dest='i440FX-pcihost' type=0 (node=0x5591421117f0)
+ # src=''
+ # |-> dest='x86_64/pc' type=0 (node=0x559142111600)
+ # |-> dest='arm/raspi2b' type=0 (node=0x559142110740)
+ ...
+ # }
+ # ALL QGRAPH NODES: {
+ # name='virtio-net-tests/announce-self' type=3 cmd_line='(null)' [available]
+ # name='arm/raspi2b' type=0 cmd_line='-M raspi2b ' [UNAVAILABLE]
+ ...
+ # }
+
+The ``virtio-net-tests/announce-self`` test is listed as "available" in the
+"ALL QGRAPH NODES" output. This means the test will execute. We can follow the
+qgraph path in the "ALL QGRAPH EDGES" output as follows: '' -> 'x86_64/pc' ->
+'i440FX-pcihost' -> 'pci-bus-pc' -> 'pci-bus' -> 'virtio-net-pci' ->
+'virtio-net'. The root of the qgraph is '' and the depth first search begins
+there.
+
+The ``arm/raspi2b`` machine node is listed as "UNAVAILABLE". Although it is
+reachable from the root via '' -> 'arm/raspi2b' the node is unavailable because
+the QEMU binary did not list it when queried by the framework. This is expected
+because we used the ``qemu-system-x86_64`` binary which does not support ARM
+machine types.
+
+If a test is unexpectedly listed as "UNAVAILABLE", first check that the "ALL
+QGRAPH EDGES" output reports edge connectivity from the root ('') to the test.
+If there is no connectivity then the qgraph nodes were not set up correctly and
+the driver or test code is incorrect. If there is connectivity, check the
+availability of each node in the path in the "ALL QGRAPH NODES" output. The
+first unavailable node in the path is the reason why the test is unavailable.
+Typically this is because the QEMU binary lacks support for the necessary
+machine type or device.
+
+Creating a new driver and its interface
+---------------------------------------
+
+Here we continue the ``sdhci`` use case, with the following scenario:
+
+- ``sdhci-test`` aims to test the ``read[q,w], writeq`` functions
+ offered by the ``sdhci`` drivers.
+- The current ``sdhci`` device is supported by both ``x86_64/pc`` and ``ARM``
+  (in this example we focus on the ``arm/raspi2b``) machines.
+- QEMU offers 2 types of drivers: ``QSDHCI_MemoryMapped`` for ``ARM`` and
+ ``QSDHCI_PCI`` for ``x86_64/pc``. Both implement the
+ ``read[q,w], writeq`` functions.
+
+In order to implement such a scenario in qgraph, the test developer needs to:
+
+- Create the ``x86_64/pc`` machine node. This machine uses the
+ ``pci-bus`` architecture so it ``contains`` a PCI driver,
+ ``pci-bus-pc``. The actual path is
+
+  ``x86_64/pc --contains--> i440FX-pcihost --contains-->
+ pci-bus-pc --produces--> pci-bus``.
+
+ For the sake of this example,
+ we do not focus on the PCI interface implementation.
+- Create the ``sdhci-pci`` driver node, representing ``QSDHCI_PCI``.
+ The driver uses the PCI bus (and its API),
+ so it must ``consume`` the ``pci-bus`` generic interface (which abstracts
+ all the pci drivers available)
+
+ ``sdhci-pci --consumes--> pci-bus``
+- Create an ``arm/raspi2b`` machine node. This machine ``contains``
+ a ``generic-sdhci`` memory mapped ``sdhci`` driver node, representing
+ ``QSDHCI_MemoryMapped``.
+
+ ``arm/raspi2b --contains--> generic-sdhci``
+- Create the ``sdhci`` interface node. This interface offers the
+ functions that are shared by all ``sdhci`` devices.
+ The interface is produced by ``sdhci-pci`` and ``generic-sdhci``,
+ the available architecture-specific drivers.
+
+ ``sdhci-pci --produces--> sdhci``
+
+ ``generic-sdhci --produces--> sdhci``
+- Create the ``sdhci-test`` test node. The test ``consumes`` the
+ ``sdhci`` interface, using its API. It doesn't need to look at
+ the supported machines or drivers.
+
+ ``sdhci-test --consumes--> sdhci``
+
+``arm/raspi2b`` machine, simplified from
+``tests/qtest/libqos/arm-raspi2-machine.c``::
+
+ #include "qgraph.h"
+
+ struct QRaspi2Machine {
+ QOSGraphObject obj;
+ QGuestAllocator alloc;
+ QSDHCI_MemoryMapped sdhci;
+ };
+
+ static void *raspi2_get_driver(void *object, const char *interface)
+ {
+ QRaspi2Machine *machine = object;
+ if (!g_strcmp0(interface, "memory")) {
+ return &machine->alloc;
+ }
+
+ fprintf(stderr, "%s not present in arm/raspi2b\n", interface);
+ g_assert_not_reached();
+ }
+
+ static QOSGraphObject *raspi2_get_device(void *obj,
+ const char *device)
+ {
+ QRaspi2Machine *machine = obj;
+ if (!g_strcmp0(device, "generic-sdhci")) {
+ return &machine->sdhci.obj;
+ }
+
+ fprintf(stderr, "%s not present in arm/raspi2b\n", device);
+ g_assert_not_reached();
+ }
+
+ static void *qos_create_machine_arm_raspi2(QTestState *qts)
+ {
+ QRaspi2Machine *machine = g_new0(QRaspi2Machine, 1);
+
+ alloc_init(&machine->alloc, ...);
+
+ /* Get node(s) contained inside (CONTAINS) */
+ machine->obj.get_device = raspi2_get_device;
+
+ /* Get node(s) produced (PRODUCES) */
+ machine->obj.get_driver = raspi2_get_driver;
+
+ /* free the object */
+ machine->obj.destructor = raspi2_destructor;
+ qos_init_sdhci_mm(&machine->sdhci, ...);
+ return &machine->obj;
+ }
+
+ static void raspi2_register_nodes(void)
+ {
+ /* arm/raspi2b --contains--> generic-sdhci */
+ qos_node_create_machine("arm/raspi2b",
+ qos_create_machine_arm_raspi2);
+ qos_node_contains("arm/raspi2b", "generic-sdhci", NULL);
+ }
+
+ libqos_init(raspi2_register_nodes);
+
+``x86_64/pc`` machine, simplified from
+``tests/qtest/libqos/x86_64_pc-machine.c``::
+
+ #include "qgraph.h"
+
+ struct i440FX_pcihost {
+ QOSGraphObject obj;
+ QPCIBusPC pci;
+ };
+
+ struct QX86PCMachine {
+ QOSGraphObject obj;
+ QGuestAllocator alloc;
+ i440FX_pcihost bridge;
+ };
+
+ /* i440FX_pcihost */
+
+ static QOSGraphObject *i440FX_host_get_device(void *obj,
+ const char *device)
+ {
+ i440FX_pcihost *host = obj;
+ if (!g_strcmp0(device, "pci-bus-pc")) {
+ return &host->pci.obj;
+ }
+ fprintf(stderr, "%s not present in i440FX-pcihost\n", device);
+ g_assert_not_reached();
+ }
+
+ /* x86_64/pc machine */
+
+ static void *pc_get_driver(void *object, const char *interface)
+ {
+ QX86PCMachine *machine = object;
+ if (!g_strcmp0(interface, "memory")) {
+ return &machine->alloc;
+ }
+
+ fprintf(stderr, "%s not present in x86_64/pc\n", interface);
+ g_assert_not_reached();
+ }
+
+ static QOSGraphObject *pc_get_device(void *obj, const char *device)
+ {
+ QX86PCMachine *machine = obj;
+ if (!g_strcmp0(device, "i440FX-pcihost")) {
+ return &machine->bridge.obj;
+ }
+
+ fprintf(stderr, "%s not present in x86_64/pc\n", device);
+ g_assert_not_reached();
+ }
+
+ static void *qos_create_machine_pc(QTestState *qts)
+ {
+ QX86PCMachine *machine = g_new0(QX86PCMachine, 1);
+
+ /* Get node(s) contained inside (CONTAINS) */
+ machine->obj.get_device = pc_get_device;
+
+ /* Get node(s) produced (PRODUCES) */
+ machine->obj.get_driver = pc_get_driver;
+
+ /* free the object */
+ machine->obj.destructor = pc_destructor;
+ pc_alloc_init(&machine->alloc, qts, ALLOC_NO_FLAGS);
+
+ /* Get node(s) contained inside (CONTAINS) */
+ machine->bridge.obj.get_device = i440FX_host_get_device;
+
+ return &machine->obj;
+ }
+
+ static void pc_machine_register_nodes(void)
+ {
+ /* x86_64/pc --contains--> 1440FX-pcihost --contains-->
+ * pci-bus-pc [--produces--> pci-bus (in pci.h)] */
+ qos_node_create_machine("x86_64/pc", qos_create_machine_pc);
+ qos_node_contains("x86_64/pc", "i440FX-pcihost", NULL);
+
+ /* contained drivers don't need a constructor,
+ * they will be init by the parent */
+ qos_node_create_driver("i440FX-pcihost", NULL);
+ qos_node_contains("i440FX-pcihost", "pci-bus-pc", NULL);
+ }
+
+ libqos_init(pc_machine_register_nodes);
+
+``sdhci`` taken from ``tests/qtest/libqos/sdhci.c``::
+
+ /* Interface node, offers the sdhci API */
+ struct QSDHCI {
+ uint16_t (*readw)(QSDHCI *s, uint32_t reg);
+ uint64_t (*readq)(QSDHCI *s, uint32_t reg);
+ void (*writeq)(QSDHCI *s, uint32_t reg, uint64_t val);
+ /* other fields */
+ };
+
+ /* Memory Mapped implementation of QSDHCI */
+ struct QSDHCI_MemoryMapped {
+ QOSGraphObject obj;
+ QSDHCI sdhci;
+ /* other driver-specific fields */
+ };
+
+ /* PCI implementation of QSDHCI */
+ struct QSDHCI_PCI {
+ QOSGraphObject obj;
+ QSDHCI sdhci;
+ /* other driver-specific fields */
+ };
+
+ /* Memory mapped implementation of QSDHCI */
+
+ static void *sdhci_mm_get_driver(void *obj, const char *interface)
+ {
+ QSDHCI_MemoryMapped *smm = obj;
+ if (!g_strcmp0(interface, "sdhci")) {
+ return &smm->sdhci;
+ }
+ fprintf(stderr, "%s not present in generic-sdhci\n", interface);
+ g_assert_not_reached();
+ }
+
+ void qos_init_sdhci_mm(QSDHCI_MemoryMapped *sdhci, QTestState *qts,
+ uint32_t addr, QSDHCIProperties *common)
+ {
+ /* Get node contained inside (CONTAINS) */
+ sdhci->obj.get_driver = sdhci_mm_get_driver;
+
+ /* SDHCI interface API */
+ sdhci->sdhci.readw = sdhci_mm_readw;
+ sdhci->sdhci.readq = sdhci_mm_readq;
+ sdhci->sdhci.writeq = sdhci_mm_writeq;
+ sdhci->qts = qts;
+ }
+
+ /* PCI implementation of QSDHCI */
+
+ static void *sdhci_pci_get_driver(void *object,
+ const char *interface)
+ {
+ QSDHCI_PCI *spci = object;
+ if (!g_strcmp0(interface, "sdhci")) {
+ return &spci->sdhci;
+ }
+
+ fprintf(stderr, "%s not present in sdhci-pci\n", interface);
+ g_assert_not_reached();
+ }
+
+ static void *sdhci_pci_create(void *pci_bus,
+ QGuestAllocator *alloc,
+ void *addr)
+ {
+ QSDHCI_PCI *spci = g_new0(QSDHCI_PCI, 1);
+ QPCIBus *bus = pci_bus;
+ uint64_t barsize;
+
+ qpci_device_init(&spci->dev, bus, addr);
+
+ /* SDHCI interface API */
+ spci->sdhci.readw = sdhci_pci_readw;
+ spci->sdhci.readq = sdhci_pci_readq;
+ spci->sdhci.writeq = sdhci_pci_writeq;
+
+ /* Get node(s) produced (PRODUCES) */
+ spci->obj.get_driver = sdhci_pci_get_driver;
+
+ spci->obj.start_hw = sdhci_pci_start_hw;
+ spci->obj.destructor = sdhci_destructor;
+ return &spci->obj;
+ }
+
+ static void qsdhci_register_nodes(void)
+ {
+ QOSGraphEdgeOptions opts = {
+ .extra_device_opts = "addr=04.0",
+ };
+
+ /* generic-sdhci */
+ /* generic-sdhci --produces--> sdhci */
+ qos_node_create_driver("generic-sdhci", NULL);
+ qos_node_produces("generic-sdhci", "sdhci");
+
+ /* sdhci-pci */
+ /* sdhci-pci --produces--> sdhci
+ * sdhci-pci --consumes--> pci-bus */
+ qos_node_create_driver("sdhci-pci", sdhci_pci_create);
+ qos_node_produces("sdhci-pci", "sdhci");
+ qos_node_consumes("sdhci-pci", "pci-bus", &opts);
+ }
+
+ libqos_init(qsdhci_register_nodes);
+
+In the above example, all possible types of relations are created::
+
+   x86_64/pc --contains--> i440FX-pcihost --contains--> pci-bus-pc
+                                                             |
+                sdhci-pci --consumes--> pci-bus <--produces--+
+                    |
+                    +--produces--+
+                                 |
+                                 v
+                               sdhci
+                                 ^
+                                 |
+                    +--produces--+
+                    |
+   arm/raspi2b --contains--> generic-sdhci
+
+or inverting the consumes edge in consumed_by::
+
+   x86_64/pc --contains--> i440FX-pcihost --contains--> pci-bus-pc
+                                                             |
+             sdhci-pci <--consumed by-- pci-bus <--produces--+
+                    |
+                    +--produces--+
+                                 |
+                                 v
+                               sdhci
+                                 ^
+                                 |
+                    +--produces--+
+                    |
+   arm/raspi2b --contains--> generic-sdhci
+
+Adding a new test
+-----------------
+
+Given the above setup, adding a new test is very simple.
+``sdhci-test``, taken from ``tests/qtest/sdhci-test.c``::
+
+ static void check_capab_sdma(QSDHCI *s, bool supported)
+ {
+ uint64_t capab, capab_sdma;
+
+ capab = s->readq(s, SDHC_CAPAB);
+ capab_sdma = FIELD_EX64(capab, SDHC_CAPAB, SDMA);
+ g_assert_cmpuint(capab_sdma, ==, supported);
+ }
+
+ static void test_registers(void *obj, void *data,
+ QGuestAllocator *alloc)
+ {
+ QSDHCI *s = obj;
+
+ /* example test */
+ check_capab_sdma(s, s->props.capab.sdma);
+ }
+
+ static void register_sdhci_test(void)
+ {
+ /* sdhci-test --consumes--> sdhci */
+ qos_add_test("registers", "sdhci", test_registers, NULL);
+ }
+
+ libqos_init(register_sdhci_test);
+
+Here a new test is created; it consumes the ``sdhci`` interface node,
+creating a valid path from both machines to the test.
+The final graph will look like this::
+
+   x86_64/pc --contains--> i440FX-pcihost --contains--> pci-bus-pc
+                                                             |
+                sdhci-pci --consumes--> pci-bus <--produces--+
+                    |
+                    +--produces--+
+                                 |
+                                 v
+                               sdhci <--consumes-- sdhci-test
+                                 ^
+                                 |
+                    +--produces--+
+                    |
+   arm/raspi2b --contains--> generic-sdhci
+
+or inverting the consumes edge in consumed_by::
+
+   x86_64/pc --contains--> i440FX-pcihost --contains--> pci-bus-pc
+                                                             |
+             sdhci-pci <--consumed by-- pci-bus <--produces--+
+                    |
+                    +--produces--+
+                                 |
+                                 v
+                               sdhci --consumed by--> sdhci-test
+                                 ^
+                                 |
+                    +--produces--+
+                    |
+   arm/raspi2b --contains--> generic-sdhci
+
+Assuming the binary is
+``QTEST_QEMU_BINARY=./qemu-system-x86_64``,
+a valid test path will be:
+``/x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/sdhci-pci/sdhci/sdhci-test``
+
+and for the binary ``QTEST_QEMU_BINARY=./qemu-system-arm``:
+
+``/arm/raspi2b/generic-sdhci/sdhci/sdhci-test``
+
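+As a rough sketch of how such a path can be run on its own (assuming the test
+was built into the ``qos-test`` binary and that you are in the build
+directory; binary names and paths may differ in your setup), the standard
+GLib ``-p`` option can be used to select a single test path::
+
+    QTEST_QEMU_BINARY=./qemu-system-x86_64 \
+        ./tests/qtest/qos-test -p \
+        /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/sdhci-pci/sdhci/sdhci-test
+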
+Additional examples are also in ``test-qgraph.c``.
+
+Qgraph API reference
+--------------------
+
+.. kernel-doc:: tests/qtest/libqos/qgraph.h
diff --git a/docs/devel/testing/qtest.rst b/docs/devel/testing/qtest.rst
new file mode 100644
index 0000000000..c5b8546b3e
--- /dev/null
+++ b/docs/devel/testing/qtest.rst
@@ -0,0 +1,91 @@
+========================================
+QTest Device Emulation Testing Framework
+========================================
+
+.. toctree::
+
+ qgraph
+
+QTest is a device emulation testing framework. It is very useful for testing
+device models; it can also control certain aspects of QEMU (such as virtual
+clock stepping) through a special-purpose "qtest" protocol. Refer to
+:ref:`qtest-protocol` for more details on the protocol.
+
+QTest cases can be executed with
+
+.. code::
+
+ make check-qtest
+
+The QTest library is implemented by ``tests/qtest/libqtest.c`` and the API is
+defined in ``tests/qtest/libqtest.h``.
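+
+As a point of reference, a libqtest-based test is roughly shaped like the
+following sketch; the device name ``foo``, the MMIO address and the expected
+reset value are hypothetical and only illustrate the shape of the API::
+
+    #include "qemu/osdep.h"
+    #include "libqtest.h"
+
+    /* hypothetical device and register, for illustration only */
+    static void test_foo_reset_value(void)
+    {
+        QTestState *qts = qtest_init("-device foo");
+
+        /* read a (made-up) MMIO register and check its reset value */
+        g_assert_cmphex(qtest_readl(qts, 0xfeb00000), ==, 0);
+
+        qtest_quit(qts);
+    }
+
+    int main(int argc, char **argv)
+    {
+        g_test_init(&argc, &argv, NULL);
+        qtest_add_func("/foo/reset-value", test_foo_reset_value);
+        return g_test_run();
+    }
+
+Such a test still needs to be registered in ``tests/qtest/meson.build``, as
+described in the steps below.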
+
+Consider adding a new QTest case when you are introducing new virtual
+hardware, or extending an existing case when you are adding functionality to
+an existing virtual device.
+
+On top of libqtest, a higher-level library, ``libqos``, was created to
+encapsulate common tasks of device drivers, such as memory management and
+communicating with system buses or devices. Many virtual device tests use
+libqos instead of calling into libqtest directly.
+Libqos also offers the Qgraph API, which increases test coverage and
+automates the setup of QEMU command line arguments and devices.
+Refer to :ref:`qgraph` for an explanation of Qgraph and its API.
+
+Steps to add a new QTest case are:
+
+1. Create a new source file for the test. (More than one file can be added as
+ necessary.) For example, ``tests/qtest/foo-test.c``.
+
+2. Write the test code with the glib and libqtest/libqos API. See also existing
+ tests and the library headers for reference.
+
+3. Register the new test in ``tests/qtest/meson.build``. Add the test
+ executable name to an appropriate ``qtests_*`` variable. There is
+ one variable per architecture, plus ``qtests_generic`` for tests
+ that can be run for all architectures. For example::
+
+ qtests_generic = [
+ ...
+ 'foo-test',
+ ...
+ ]
+
+4. If the test has more than one source file or needs to be linked with any
+ dependency other than ``qemuutil`` and ``qos``, list them in the ``qtests``
+ dictionary. For example a test that needs to use the ``QIO`` library
+ will have an entry like::
+
+ {
+ ...
+ 'foo-test': [io],
+ ...
+ }
+
+Debugging a QTest failure is slightly harder than debugging a unit test,
+because the tests look up QEMU program names in environment variables such as
+``QTEST_QEMU_BINARY`` and ``QTEST_QEMU_IMG``, and because it is not easy to
+attach gdb to the QEMU process spawned from the test. But invoking the test
+manually and running it under gdb is still simple to do: find out the actual
+command from the output of
+
+.. code::
+
+ make check-qtest V=1
+
+which you can run manually.
+
+
+.. _qtest-protocol:
+
+QTest Protocol
+--------------
+
+.. kernel-doc:: system/qtest.c
+ :doc: QTest Protocol
+
+
+libqtest API reference
+----------------------
+
+.. kernel-doc:: tests/qtest/libqtest.h