author    0xb10c <b10c@b10c.me>  2024-04-02 15:16:58 +0200
committer 0xb10c <b10c@b10c.me>  2024-09-03 14:15:37 +0200
commit    cd0edf26c07c8c615f3ae3ac040c4774dcc8e650
tree      97d004ef7af2e5224ec856f797e1002de5c92e52
parent    9cb9651d92ddb5d92724f6a52440601c7a0bbcf8
tracing: cast block_connected duration to nanoseconds
When the tracepoint was introduced in 8f37f5c2a562c38c83fc40234ade9c301fc4e685,
the connect_block duration was passed in microseconds (`µs`). With the
switch to a steady clock in fabf1cdb206e368a9433abf99a5ea2762a5ed2c0,
this changed to nanoseconds (`ns`). As the test only checked that the
duration value is `> 0` as a plausibility check, this went unnoticed.
I detected this when setting up monitoring for block validation time
as part of the Great Consensus Cleanup Revival discussion.
This change casts the duration explicitly to nanoseconds (it has been
nanoseconds for the last three releases; switching back now would 'break'
the broken API again, and there don't seem to be many affected users),
updates the documentation, and adds an upper-bound check to the
tracepoint interface tests. The upper bound is quite lax, since mining
the block takes much longer than connecting the empty test block, but it
is still able to detect incorrect duration units being passed.