path: root/src/test/blockchain_tests.cpp
author	0xb10c <b10c@b10c.me>	2024-04-02 15:16:58 +0200
committer	0xb10c <b10c@b10c.me>	2024-09-03 14:15:37 +0200
commit	cd0edf26c07c8c615f3ae3ac040c4774dcc8e650 (patch)
tree	97d004ef7af2e5224ec856f797e1002de5c92e52 /src/test/blockchain_tests.cpp
parent	9cb9651d92ddb5d92724f6a52440601c7a0bbcf8 (diff)
tracing: cast block_connected duration to nanoseconds
When the tracepoint was introduced in 8f37f5c2a562c38c83fc40234ade9c301fc4e685, the connect_block duration was passed in microseconds (µs). With the switch to a steady clock in fabf1cdb206e368a9433abf99a5ea2762a5ed2c0, this changed to nanoseconds (ns). As the test only checked that the duration value is `> 0` as a plausibility check, this went unnoticed. I detected it when setting up monitoring for block validation time as part of the Great Consensus Cleanup Revival discussion.

This change casts the duration explicitly to nanoseconds (it has been nanoseconds for the last three releases; switching back now would 'break' the broken API again, and there don't seem to be many affected users), updates the documentation, and adds an upper-bound check to the tracepoint interface tests. The upper bound is quite lax, since mining the block takes much longer than connecting the empty test block; it is nonetheless able to detect an incorrect duration unit being passed.
Diffstat (limited to 'src/test/blockchain_tests.cpp')
0 files changed, 0 insertions, 0 deletions