author    | 0xb10c <b10c@b10c.me> | 2024-04-02 15:16:58 +0200
committer | 0xb10c <b10c@b10c.me> | 2024-09-03 14:15:37 +0200
commit    | cd0edf26c07c8c615f3ae3ac040c4774dcc8e650 (patch)
tree      | 97d004ef7af2e5224ec856f797e1002de5c92e52 /doc/tracing.md
parent    | 9cb9651d92ddb5d92724f6a52440601c7a0bbcf8 (diff)
tracing: cast block_connected duration to nanoseconds
When the tracepoint was introduced in 8f37f5c2a562c38c83fc40234ade9c301fc4e685,
the connect_block duration was passed in microseconds (`µs`). With the
switch to a steady clock in fabf1cdb206e368a9433abf99a5ea2762a5ed2c0,
this changed to nanoseconds (`ns`). As the test only checked that the
duration value is `> 0` as a plausibility check, the change went unnoticed.
I detected this when setting up monitoring for block validation time
as part of the Great Consensus Cleanup Revival discussion.
This change casts the duration explicitly to nanoseconds (it has been
nanoseconds for the last three releases; switching back now would 'break'
the broken API again, and there don't seem to be many affected users),
updates the documentation, and adds an upper-bound check to the
tracepoint interface tests. The upper bound is quite lax, as mining the
block takes much longer than connecting the empty test block, but it is
able to detect a duration passed in the wrong unit.
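Since the duration is now documented to be in nanoseconds, a tracepoint
consumer can read argument 6 of `validation:block_connected` directly as
`ns`. The following is a minimal sketch of such a consumer using the bcc
toolkit; the bitcoind path and the output formatting are illustrative
assumptions, not part of this change.

```python
#!/usr/bin/env python3
# Sketch of a bcc-based consumer for the validation:block_connected
# tracepoint. Assumes a bitcoind binary built with USDT support; the
# path below is a placeholder.
from bcc import BPF, USDT

BITCOIND_PATH = "/usr/local/bin/bitcoind"  # assumption: adjust to your build

PROGRAM = """
#include <uapi/linux/ptrace.h>

struct connected_block { u64 duration_ns; };
BPF_PERF_OUTPUT(block_connected);

int trace_block_connected(struct pt_regs *ctx) {
    struct connected_block b = {};
    // Argument 6 of validation:block_connected: the block connection
    // duration, in nanoseconds since fabf1cdb206e368a9433abf99a5ea2762a5ed2c0.
    bpf_usdt_readarg(6, ctx, &b.duration_ns);
    block_connected.perf_submit(ctx, &b, sizeof(b));
    return 0;
}
"""

usdt = USDT(path=BITCOIND_PATH)
usdt.enable_probe(probe="block_connected", fn_name="trace_block_connected")
bpf = BPF(text=PROGRAM, usdt_contexts=[usdt])

def print_event(cpu, data, size):
    event = bpf["block_connected"].event(data)
    # The raw value is nanoseconds; divide by 1e6 only for display.
    print(f"block connected in {event.duration_ns / 1e6:.3f} ms")

bpf["block_connected"].open_perf_buffer(print_event)
while True:
    bpf.perf_buffer_poll()
```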
Diffstat (limited to 'doc/tracing.md')
-rw-r--r-- | doc/tracing.md | 2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/doc/tracing.md b/doc/tracing.md
index 3948b1ab49..c12af122db 100644
--- a/doc/tracing.md
+++ b/doc/tracing.md
@@ -106,7 +106,7 @@ Arguments passed:
 3. Transactions in the Block as `uint64`
 4. Inputs spend in the Block as `int32`
 5. SigOps in the Block (excluding coinbase SigOps) `uint64`
-6. Time it took to connect the Block in microseconds (µs) as `uint64`
+6. Time it took to connect the Block in nanoseconds (ns) as `uint64`
 
 ### Context `utxocache`
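For context, the lax upper-bound check described in the commit message
can look roughly like the sketch below. The helper name and the concrete
bound (the test's own elapsed wall-clock time around mining and
connecting the block) are assumptions for illustration, not copied from
the actual interface test.

```python
import time

def connect_block_and_get_duration_ns() -> int:
    """Hypothetical helper: mines and connects a test block, then returns
    the duration reported by the block_connected tracepoint."""
    raise NotImplementedError

start = time.monotonic_ns()
duration_ns = connect_block_and_get_duration_ns()
elapsed_ns = time.monotonic_ns() - start

# Lax upper bound: the tracepoint only times connecting the block, which
# is a small fraction of the elapsed time (mining dominates). A duration
# reported in the wrong unit would overshoot this bound by orders of
# magnitude.
assert 0 < duration_ns <= elapsed_ns, (
    f"implausible duration {duration_ns} ns (test elapsed: {elapsed_ns} ns)"
)
```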