author    MarcoFalke <falke.marco@gmail.com>  2019-07-28 10:12:46 -0400
committer MarcoFalke <falke.marco@gmail.com>  2019-07-28 10:12:49 -0400
commit    3489b715120b1bf7559fbddf03cd87b889fdca47 (patch)
tree      8ef9ad91358bea2e2e043b70391e624090010e62
parent    94df084f2ac591b1e7e2d4b621b764a87110ebce (diff)
parent    bf3be5297a746982cf8e83f45d342121e5665f80 (diff)
Merge #16464: [qa] Ensure we don't generate a too-big block in p2sh sigops test
bf3be5297a746982cf8e83f45d342121e5665f80 [qa] Ensure we don't generate a too-big block in p2sh sigops test (Suhas Daftuar)

Pull request description:

  There's a bug in the loop that is calculating the block size in the p2sh sigops test -- we start with the size of the block when it has no transactions, and then increment by the size of each transaction we add, without regard to the changing size of the encoding for the number of transactions in the block.

  This might be fine if the block construction were deterministic, but the first transaction in the block has an ECDSA signature which can be variable length, so we see intermittent failures of this test when the initial transaction has a 70-byte signature and the block ends up being one byte too big.

  Fix this by double-checking the block size after construction.

ACKs for top commit:
  jonasschnelli:
    utACK bf3be5297a746982cf8e83f45d342121e5665f80
  jnewbery:
    tested ACK bf3be5297a746982cf8e83f45d342121e5665f80

Tree-SHA512: f86385b96f7a6feafa4183727f5f2c9aae8ad70060b574aad13b150f174a17ce9a0040bc51ae7a04bd08f2a5298b983a84b0aed5e86a8440189ebc63b99e64dc
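The undercount described above is easy to demonstrate: Bitcoin serializes the transaction count with a variable-width CompactSize integer, so a running total that only adds per-transaction sizes drifts once the count crosses an encoding boundary. A minimal sketch (standalone Python with illustrative helper names; `HEADER_SIZE` and `block_size` are assumptions for this example, not Bitcoin Core's serialization code):

```python
# Standalone sketch (assumed helpers, not Bitcoin Core code) of why the
# test's running size total can undercount the real block size.

def compact_size_len(n):
    """Bytes Bitcoin's CompactSize integer uses to encode n."""
    if n < 0xfd:
        return 1
    if n <= 0xffff:
        return 3
    if n <= 0xffffffff:
        return 5
    return 9

HEADER_SIZE = 80  # a Bitcoin block header is always 80 bytes

def block_size(tx_sizes):
    """Actual serialized size: header + tx-count encoding + transactions."""
    return HEADER_SIZE + compact_size_len(len(tx_sizes)) + sum(tx_sizes)

# Naive accounting mirroring the buggy loop: start from the empty block's
# size and add each transaction's size, never revisiting the count field.
tx_sizes = [100] * 300
naive = HEADER_SIZE + compact_size_len(0) + sum(tx_sizes)
actual = block_size(tx_sizes)
# The count's encoding grew from 1 byte (up to 252 txs) to 3 bytes,
# so the naive total is 2 bytes short of the real serialized size.
```

The intermittent failure the PR describes comes from a second source of drift: a DER-encoded ECDSA signature varies in length (commonly 70 to 72 bytes), so the first transaction's size differs run to run and could push the block exactly one byte over the limit. Re-checking the serialized size after construction, as the fix does, is robust to both sources of variance.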
-rwxr-xr-x  test/functional/feature_block.py  8
 1 file changed, 8 insertions(+), 0 deletions(-)
diff --git a/test/functional/feature_block.py b/test/functional/feature_block.py
index b5eac88ba7..fdb608d457 100755
--- a/test/functional/feature_block.py
+++ b/test/functional/feature_block.py
@@ -486,6 +486,14 @@ class FullBlockTest(BitcoinTestFramework):
             tx_last = tx_new
             b39_outputs += 1
 
+        # The accounting in the loop above can be off, because it misses the
+        # compact size encoding of the number of transactions in the block.
+        # Make sure we didn't accidentally make too big a block. Note that the
+        # size of the block has non-determinism due to the ECDSA signature in
+        # the first transaction.
+        while (len(b39.serialize()) >= MAX_BLOCK_BASE_SIZE):
+            del b39.vtx[-1]
+
         b39 = self.update_block(39, [])
         self.send_blocks([b39], True)
         self.save_spendable_output()