author    fanquake <fanquake@gmail.com>  2022-08-30 15:34:10 +0100
committer fanquake <fanquake@gmail.com>  2022-08-30 15:37:59 +0100
commit    e9035f867a36a430998e3811385958229ac79cf5 (patch)
tree      ccd1df3e2a12e7ed0dd143e59cc17bf7d7d71331 /src
parent    cfda740b332c77800f9bb2506d840dad3f4938c0 (diff)
parent    3add23454624c4c79c9eebc060b6fbed4e3131a7 (diff)
Merge bitcoin/bitcoin#25717: p2p: Implement anti-DoS headers sync
3add23454624c4c79c9eebc060b6fbed4e3131a7 ui: show header pre-synchronization progress (Pieter Wuille)
738421c50f2dbd7395b50a5dbdf6168b07435e62 Emit NotifyHeaderTip signals for pre-synchronization progress (Pieter Wuille)
376086fc5a187f5b2ab3a0d1202ed4e6c22bdb50 Make validation interface capable of signalling header presync (Pieter Wuille)
93eae27031a65b4156df49015ae45b2b541b4e5a Test large reorgs with headerssync logic (Suhas Daftuar)
355547334f7d08640ee1fa291227356d61145d1a Track headers presync progress and log it (Pieter Wuille)
03712dddfbb9fe0dc7a2ead53c65106189f5c803 Expose HeadersSyncState::m_current_height in getpeerinfo() (Suhas Daftuar)
150a5486db50ff77c91765392149000029c8a309 Test headers sync using minchainwork threshold (Suhas Daftuar)
0b6aa826b53470c9cc8ef4a153fa710dce80882f Add unit test for HeadersSyncState (Suhas Daftuar)
83c6a0c5249c4ecbd11f7828c84a50fb473faba3 Reduce spurious messages during headers sync (Suhas Daftuar)
ed6cddd98e32263fc116a4380af6d66da20da990 Require callers of AcceptBlockHeader() to perform anti-dos checks (Suhas Daftuar)
551a8d957c4c44afbd0d608fcdf7c6a4352babce Utilize anti-DoS headers download strategy (Suhas Daftuar)
ed470940cddbeb40425960d51cefeec4948febe4 Add functions to construct locators without CChain (Pieter Wuille)
84852bb6bb3579e475ce78fe729fd125ddbc715f Add bitdeque, an std::deque<bool> analogue that does bit packing. (Pieter Wuille)
1d4cfa4272cf2c8b980cc8762c1ff2220d3e8d51 Add function to validate difficulty changes (Suhas Daftuar)

Pull request description:

New nodes starting up for the first time lack protection against DoS from low-difficulty headers. While checkpoints serve as our protection against headers that fork from the main chain below the known checkpointed values, this protection only applies to nodes that have been able to download the honest chain to the checkpointed heights.

We can protect all nodes from DoS from low-difficulty headers by adopting a different strategy: before we commit to storing a header in permanent storage, first verify that the header is part of a chain that has sufficiently high work (either `nMinimumChainWork`, or something comparable to our tip). This means that we will download headers from a given peer twice: once to verify the work on the chain, and a second time when permanently storing the headers.

The p2p protocol doesn't provide an easy way for us to ensure that we receive the same headers during the second download of a peer's headers chain. To ensure that a peer doesn't (say) give us the main chain in phase 1 to trick us into permanently storing an alternate, low-work chain in phase 2, we store commitments to the headers during our first download, which we validate in the second download.

Some parameters must be chosen for commitment size/frequency in phase 1, and validation of commitments in phase 2. In this PR, those parameters are chosen to both (a) minimize the per-peer memory usage that an attacker could utilize, and (b) bound the expected amount of permanent memory that an attacker could get us to use to be well below the memory growth that we'd get from the honest chain (where we expect 1 new block header every 10 minutes).

After this PR, we should be able to remove checkpoints from our code, which is a nice philosophical change for us to make as well, as there has been confusion over the years about the role checkpoints play in Bitcoin's consensus algorithm.

Thanks to Pieter Wuille for collaborating on this design.
ACKs for top commit:
  Sjors: re-tACK 3add23454624c4c79c9eebc060b6fbed4e3131a7
  mzumsande: re-ACK 3add23454624c4c79c9eebc060b6fbed4e3131a7
  sipa: re-ACK 3add23454624c4c79c9eebc060b6fbed4e3131a7
  glozow: ACK 3add234546

Tree-SHA512: e7789d65f62f72141b8899eb4a2fb3d0621278394d2d7adaa004675250118f89a4e4cb42777fe56649d744ec445ad95141e10f6def65f0a58b7b35b2e654a875
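A rough back-of-the-envelope for the per-peer memory implied by the parameters described above, using the constants this merge introduces in src/headerssync.cpp (an editorial sketch with illustrative figures; the actual parameter analysis is in the simulation script linked from that file):

    // Sketch only: evaluates the memory bounds implied by the new constants.
    constexpr size_t HEADER_COMMITMENT_PERIOD{584};  // 1 commitment bit per 584 headers (phase 1)
    constexpr size_t REDOWNLOAD_BUFFER_SIZE{13959};  // compressed headers buffered in phase 2
    constexpr size_t COMPRESSED_HEADER_BYTES{48};    // sizeof(CompressedHeader)

    // Phase 2 lookahead buffer: a fixed per-peer upper bound.
    constexpr size_t redownload_buffer_bytes{REDOWNLOAD_BUFFER_SIZE * COMPRESSED_HEADER_BYTES}; // ~670 kB

    // Phase 1 commitments for a chain of roughly 750,000 headers (about the 2022 mainnet height):
    constexpr size_t commitment_bits{750'000 / HEADER_COMMITMENT_PERIOD}; // ~1,280 bits, i.e. ~160 bytes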
Diffstat (limited to 'src')
-rw-r--r-- src/Makefile.am                            |   3
-rw-r--r-- src/Makefile.test.include                  |   2
-rw-r--r-- src/bitcoin-chainstate.cpp                 |   5
-rw-r--r-- src/chain.cpp                              |  47
-rw-r--r-- src/chain.h                                |  10
-rw-r--r-- src/consensus/validation.h                 |   1
-rw-r--r-- src/headerssync.cpp                        | 317
-rw-r--r-- src/headerssync.h                          | 277
-rw-r--r-- src/interfaces/node.h                      |   2
-rw-r--r-- src/logging.cpp                            |   3
-rw-r--r-- src/logging.h                              |   1
-rw-r--r-- src/net_processing.cpp                     | 459
-rw-r--r-- src/net_processing.h                       |   1
-rw-r--r-- src/node/interface_ui.cpp                  |   2
-rw-r--r-- src/node/interface_ui.h                    |   2
-rw-r--r-- src/node/interfaces.cpp                    |  10
-rw-r--r-- src/pow.cpp                                |  51
-rw-r--r-- src/pow.h                                  |  14
-rw-r--r-- src/qt/bitcoin.cpp                         |   2
-rw-r--r-- src/qt/bitcoingui.cpp                      |  26
-rw-r--r-- src/qt/bitcoingui.h                        |   5
-rw-r--r-- src/qt/clientmodel.cpp                     |  18
-rw-r--r-- src/qt/clientmodel.h                       |  10
-rw-r--r-- src/qt/modaloverlay.cpp                    |  12
-rw-r--r-- src/qt/modaloverlay.h                      |   3
-rw-r--r-- src/qt/rpcconsole.cpp                      |   6
-rw-r--r-- src/qt/rpcconsole.h                        |   4
-rw-r--r-- src/qt/sendcoinsdialog.cpp                 |   2
-rw-r--r-- src/qt/sendcoinsdialog.h                   |   4
-rw-r--r-- src/rpc/mining.cpp                         |   6
-rw-r--r-- src/rpc/net.cpp                            |   2
-rw-r--r-- src/test/blockfilter_index_tests.cpp       |   8
-rw-r--r-- src/test/coinstatsindex_tests.cpp          |   2
-rw-r--r-- src/test/fuzz/bitdeque.cpp                 | 542
-rw-r--r-- src/test/fuzz/pow.cpp                      |  37
-rw-r--r-- src/test/fuzz/utxo_snapshot.cpp            |   2
-rw-r--r-- src/test/headers_sync_chainwork_tests.cpp  | 146
-rw-r--r-- src/test/miner_tests.cpp                   |   2
-rw-r--r-- src/test/pow_tests.cpp                     |  27
-rw-r--r-- src/test/skiplist_tests.cpp                |   2
-rw-r--r-- src/test/util/mining.cpp                   |   2
-rw-r--r-- src/test/util/setup_common.cpp             |   2
-rw-r--r-- src/test/util_tests.cpp                    |   1
-rw-r--r-- src/test/validation_block_tests.cpp        |  10
-rw-r--r-- src/test/validation_chainstate_tests.cpp   |   2
-rw-r--r-- src/util/bitdeque.h                        | 469
-rw-r--r-- src/validation.cpp                         |  66
-rw-r--r-- src/validation.h                           |  33
48 files changed, 2519 insertions, 141 deletions
diff --git a/src/Makefile.am b/src/Makefile.am
index 18c6c25b96..bf26cc9674 100644
--- a/src/Makefile.am
+++ b/src/Makefile.am
@@ -151,6 +151,7 @@ BITCOIN_CORE_H = \
external_signer.h \
flatfile.h \
fs.h \
+ headerssync.h \
httprpc.h \
httpserver.h \
i2p.h \
@@ -264,6 +265,7 @@ BITCOIN_CORE_H = \
undo.h \
util/asmap.h \
util/bip32.h \
+ util/bitdeque.h \
util/bytevectorhash.h \
util/check.h \
util/epochguard.h \
@@ -360,6 +362,7 @@ libbitcoin_node_a_SOURCES = \
dbwrapper.cpp \
deploymentstatus.cpp \
flatfile.cpp \
+ headerssync.cpp \
httprpc.cpp \
httpserver.cpp \
i2p.cpp \
diff --git a/src/Makefile.test.include b/src/Makefile.test.include
index 22a95f9682..8a2386a2b4 100644
--- a/src/Makefile.test.include
+++ b/src/Makefile.test.include
@@ -93,6 +93,7 @@ BITCOIN_TESTS =\
test/fs_tests.cpp \
test/getarg_tests.cpp \
test/hash_tests.cpp \
+ test/headers_sync_chainwork_tests.cpp \
test/httpserver_tests.cpp \
test/i2p_tests.cpp \
test/interfaces_tests.cpp \
@@ -235,6 +236,7 @@ test_fuzz_fuzz_SOURCES = \
test/fuzz/banman.cpp \
test/fuzz/base_encode_decode.cpp \
test/fuzz/bech32.cpp \
+ test/fuzz/bitdeque.cpp \
test/fuzz/block.cpp \
test/fuzz/block_header.cpp \
test/fuzz/blockfilter.cpp \
diff --git a/src/bitcoin-chainstate.cpp b/src/bitcoin-chainstate.cpp
index 7312ae45d4..f3bd543de8 100644
--- a/src/bitcoin-chainstate.cpp
+++ b/src/bitcoin-chainstate.cpp
@@ -195,7 +195,7 @@ int main(int argc, char* argv[])
bool new_block;
auto sc = std::make_shared<submitblock_StateCatcher>(block.GetHash());
RegisterSharedValidationInterface(sc);
- bool accepted = chainman.ProcessNewBlock(blockptr, /*force_processing=*/true, /*new_block=*/&new_block);
+ bool accepted = chainman.ProcessNewBlock(blockptr, /*force_processing=*/true, /*min_pow_checked=*/true, /*new_block=*/&new_block);
UnregisterSharedValidationInterface(sc);
if (!new_block && accepted) {
std::cerr << "duplicate" << std::endl;
@@ -210,6 +210,9 @@ int main(int argc, char* argv[])
case BlockValidationResult::BLOCK_RESULT_UNSET:
std::cerr << "initial value. Block has not yet been rejected" << std::endl;
break;
+ case BlockValidationResult::BLOCK_HEADER_LOW_WORK:
+ std::cerr << "the block header may be on a too-little-work chain" << std::endl;
+ break;
case BlockValidationResult::BLOCK_CONSENSUS:
std::cerr << "invalid by consensus rules (excluding any below reasons)" << std::endl;
break;
diff --git a/src/chain.cpp b/src/chain.cpp
index 446bb216c2..19c35b5012 100644
--- a/src/chain.cpp
+++ b/src/chain.cpp
@@ -28,32 +28,33 @@ void CChain::SetTip(CBlockIndex& block)
}
}
-CBlockLocator CChain::GetLocator(const CBlockIndex *pindex) const {
- int nStep = 1;
- std::vector<uint256> vHave;
- vHave.reserve(32);
-
- if (!pindex)
- pindex = Tip();
- while (pindex) {
- vHave.push_back(pindex->GetBlockHash());
- // Stop when we have added the genesis block.
- if (pindex->nHeight == 0)
- break;
+std::vector<uint256> LocatorEntries(const CBlockIndex* index)
+{
+ int step = 1;
+ std::vector<uint256> have;
+ if (index == nullptr) return have;
+
+ have.reserve(32);
+ while (index) {
+ have.emplace_back(index->GetBlockHash());
+ if (index->nHeight == 0) break;
// Exponentially larger steps back, plus the genesis block.
- int nHeight = std::max(pindex->nHeight - nStep, 0);
- if (Contains(pindex)) {
- // Use O(1) CChain index if possible.
- pindex = (*this)[nHeight];
- } else {
- // Otherwise, use O(log n) skiplist.
- pindex = pindex->GetAncestor(nHeight);
- }
- if (vHave.size() > 10)
- nStep *= 2;
+ int height = std::max(index->nHeight - step, 0);
+ // Use skiplist.
+ index = index->GetAncestor(height);
+ if (have.size() > 10) step *= 2;
}
+ return have;
+}
- return CBlockLocator(vHave);
+CBlockLocator GetLocator(const CBlockIndex* index)
+{
+ return CBlockLocator{std::move(LocatorEntries(index))};
+}
+
+CBlockLocator CChain::GetLocator() const
+{
+ return ::GetLocator(Tip());
}
const CBlockIndex *CChain::FindFork(const CBlockIndex *pindex) const {
diff --git a/src/chain.h b/src/chain.h
index cc1d9e2d22..2d3b084b9b 100644
--- a/src/chain.h
+++ b/src/chain.h
@@ -473,8 +473,8 @@ public:
/** Set/initialize a chain with a given tip. */
void SetTip(CBlockIndex& block);
- /** Return a CBlockLocator that refers to a block in this chain (by default the tip). */
- CBlockLocator GetLocator(const CBlockIndex* pindex = nullptr) const;
+ /** Return a CBlockLocator that refers to the tip of this chain. */
+ CBlockLocator GetLocator() const;
/** Find the last common block between this chain and a block index entry. */
const CBlockIndex* FindFork(const CBlockIndex* pindex) const;
@@ -483,4 +483,10 @@ public:
CBlockIndex* FindEarliestAtLeast(int64_t nTime, int height) const;
};
+/** Get a locator for a block index entry. */
+CBlockLocator GetLocator(const CBlockIndex* index);
+
+/** Construct a list of hash entries to put in a locator. */
+std::vector<uint256> LocatorEntries(const CBlockIndex* index);
+
#endif // BITCOIN_CHAIN_H
diff --git a/src/consensus/validation.h b/src/consensus/validation.h
index 6027bb9aeb..9c0aa09356 100644
--- a/src/consensus/validation.h
+++ b/src/consensus/validation.h
@@ -79,6 +79,7 @@ enum class BlockValidationResult {
BLOCK_INVALID_PREV, //!< A block this one builds on is invalid
BLOCK_TIME_FUTURE, //!< block timestamp was > 2 hours in the future (or our clock is bad)
BLOCK_CHECKPOINT, //!< the block failed to meet one of our checkpoints
+ BLOCK_HEADER_LOW_WORK //!< the block header may be on a too-little-work chain
};
diff --git a/src/headerssync.cpp b/src/headerssync.cpp
new file mode 100644
index 0000000000..3eca492d81
--- /dev/null
+++ b/src/headerssync.cpp
@@ -0,0 +1,317 @@
+// Copyright (c) 2022 The Bitcoin Core developers
+// Distributed under the MIT software license, see the accompanying
+// file COPYING or http://www.opensource.org/licenses/mit-license.php.
+
+#include <headerssync.h>
+#include <logging.h>
+#include <pow.h>
+#include <timedata.h>
+#include <util/check.h>
+
+// The two constants below are computed using the simulation script on
+// https://gist.github.com/sipa/016ae445c132cdf65a2791534dfb7ae1
+
+//! Store a commitment to a header every HEADER_COMMITMENT_PERIOD blocks.
+constexpr size_t HEADER_COMMITMENT_PERIOD{584};
+
+//! Only feed headers to validation once this many headers on top have been
+//! received and validated against commitments.
+constexpr size_t REDOWNLOAD_BUFFER_SIZE{13959}; // 13959/584 = ~23.9 commitments
+
+// Our memory analysis assumes 48 bytes for a CompressedHeader (so we should
+// re-calculate parameters if we compress further)
+static_assert(sizeof(CompressedHeader) == 48);
+
+HeadersSyncState::HeadersSyncState(NodeId id, const Consensus::Params& consensus_params,
+ const CBlockIndex* chain_start, const arith_uint256& minimum_required_work) :
+ m_id(id), m_consensus_params(consensus_params),
+ m_chain_start(chain_start),
+ m_minimum_required_work(minimum_required_work),
+ m_current_chain_work(chain_start->nChainWork),
+ m_commit_offset(GetRand<unsigned>(HEADER_COMMITMENT_PERIOD)),
+ m_last_header_received(m_chain_start->GetBlockHeader()),
+ m_current_height(chain_start->nHeight)
+{
+ // Estimate the number of blocks that could possibly exist on the peer's
+ // chain *right now* using 6 blocks/second (fastest blockrate given the MTP
+ // rule) times the number of seconds from the last allowed block until
+ // today. This serves as a memory bound on how many commitments we might
+ // store from this peer, and we can safely give up syncing if the peer
+ // exceeds this bound, because it's not possible for a consensus-valid
+ // chain to be longer than this (at the current time -- in the future we
+ // could try again, if necessary, to sync a longer chain).
+ m_max_commitments = 6*(Ticks<std::chrono::seconds>(GetAdjustedTime() - NodeSeconds{std::chrono::seconds{chain_start->GetMedianTimePast()}}) + MAX_FUTURE_BLOCK_TIME) / HEADER_COMMITMENT_PERIOD;
+
+ LogPrint(BCLog::HEADERSSYNC, "Initial headers sync started with peer=%d: height=%i, max_commitments=%i, min_work=%s\n", m_id, m_current_height, m_max_commitments, m_minimum_required_work.ToString());
+}
+
+/** Free any memory in use, and mark this object as no longer usable. This is
+ * required to guarantee that we won't reuse this object with the same
+ * SaltedTxidHasher for another sync. */
+void HeadersSyncState::Finalize()
+{
+ Assume(m_download_state != State::FINAL);
+ m_header_commitments = {};
+ m_last_header_received.SetNull();
+ m_redownloaded_headers = {};
+ m_redownload_buffer_last_hash.SetNull();
+ m_redownload_buffer_first_prev_hash.SetNull();
+ m_process_all_remaining_headers = false;
+ m_current_height = 0;
+
+ m_download_state = State::FINAL;
+}
+
+/** Process the next batch of headers received from our peer.
+ * Validate and store commitments, and compare total chainwork to our target to
+ * see if we can switch to REDOWNLOAD mode. */
+HeadersSyncState::ProcessingResult HeadersSyncState::ProcessNextHeaders(const
+ std::vector<CBlockHeader>& received_headers, const bool full_headers_message)
+{
+ ProcessingResult ret;
+
+ Assume(!received_headers.empty());
+ if (received_headers.empty()) return ret;
+
+ Assume(m_download_state != State::FINAL);
+ if (m_download_state == State::FINAL) return ret;
+
+ if (m_download_state == State::PRESYNC) {
+ // During PRESYNC, we minimally validate block headers and
+ // occasionally add commitments to them, until we reach our work
+ // threshold (at which point m_download_state is updated to REDOWNLOAD).
+ ret.success = ValidateAndStoreHeadersCommitments(received_headers);
+ if (ret.success) {
+ if (full_headers_message || m_download_state == State::REDOWNLOAD) {
+ // A full headers message means the peer may have more to give us;
+ // also if we just switched to REDOWNLOAD then we need to re-request
+ // headers from the beginning.
+ ret.request_more = true;
+ } else {
+ Assume(m_download_state == State::PRESYNC);
+ // If we're in PRESYNC and we get a non-full headers
+ // message, then the peer's chain has ended and definitely doesn't
+ // have enough work, so we can stop our sync.
+ LogPrint(BCLog::HEADERSSYNC, "Initial headers sync aborted with peer=%d: incomplete headers message at height=%i (presync phase)\n", m_id, m_current_height);
+ }
+ }
+ } else if (m_download_state == State::REDOWNLOAD) {
+ // During REDOWNLOAD, we compare our stored commitments to what we
+ // receive, and add headers to our redownload buffer. When the buffer
+ // gets big enough (meaning that we've checked enough commitments),
+ // we'll return a batch of headers to the caller for processing.
+ ret.success = true;
+ for (const auto& hdr : received_headers) {
+ if (!ValidateAndStoreRedownloadedHeader(hdr)) {
+ // Something went wrong -- the peer gave us an unexpected chain.
+ // We could consider looking at the reason for failure and
+ // punishing the peer, but for now just give up on sync.
+ ret.success = false;
+ break;
+ }
+ }
+
+ if (ret.success) {
+ // Return any headers that are ready for acceptance.
+ ret.pow_validated_headers = PopHeadersReadyForAcceptance();
+
+ // If we hit our target blockhash, then all remaining headers will be
+ // returned and we can clear any leftover internal state.
+ if (m_redownloaded_headers.empty() && m_process_all_remaining_headers) {
+ LogPrint(BCLog::HEADERSSYNC, "Initial headers sync complete with peer=%d: releasing all at height=%i (redownload phase)\n", m_id, m_redownload_buffer_last_height);
+ } else if (full_headers_message) {
+ // If the headers message is full, we need to request more.
+ ret.request_more = true;
+ } else {
+ // For some reason our peer gave us a high-work chain, but is now
+ // declining to serve us that full chain again. Give up.
+ // Note that there's no more processing to be done with these
+ // headers, so we can still return success.
+ LogPrint(BCLog::HEADERSSYNC, "Initial headers sync aborted with peer=%d: incomplete headers message at height=%i (redownload phase)\n", m_id, m_redownload_buffer_last_height);
+ }
+ }
+ }
+
+ if (!(ret.success && ret.request_more)) Finalize();
+ return ret;
+}
+
+bool HeadersSyncState::ValidateAndStoreHeadersCommitments(const std::vector<CBlockHeader>& headers)
+{
+ // The caller should not give us an empty set of headers.
+ Assume(headers.size() > 0);
+ if (headers.size() == 0) return true;
+
+ Assume(m_download_state == State::PRESYNC);
+ if (m_download_state != State::PRESYNC) return false;
+
+ if (headers[0].hashPrevBlock != m_last_header_received.GetHash()) {
+ // Somehow our peer gave us a header that doesn't connect.
+ // This might be benign -- perhaps our peer reorged away from the chain
+ // they were on. Give up on this sync for now (likely we will start a
+ // new sync with a new starting point).
+ LogPrint(BCLog::HEADERSSYNC, "Initial headers sync aborted with peer=%d: non-continuous headers at height=%i (presync phase)\n", m_id, m_current_height);
+ return false;
+ }
+
+ // If it does connect, (minimally) validate and occasionally store
+ // commitments.
+ for (const auto& hdr : headers) {
+ if (!ValidateAndProcessSingleHeader(hdr)) {
+ return false;
+ }
+ }
+
+ if (m_current_chain_work >= m_minimum_required_work) {
+ m_redownloaded_headers.clear();
+ m_redownload_buffer_last_height = m_chain_start->nHeight;
+ m_redownload_buffer_first_prev_hash = m_chain_start->GetBlockHash();
+ m_redownload_buffer_last_hash = m_chain_start->GetBlockHash();
+ m_redownload_chain_work = m_chain_start->nChainWork;
+ m_download_state = State::REDOWNLOAD;
+ LogPrint(BCLog::HEADERSSYNC, "Initial headers sync transition with peer=%d: reached sufficient work at height=%i, redownloading from height=%i\n", m_id, m_current_height, m_redownload_buffer_last_height);
+ }
+ return true;
+}
+
+bool HeadersSyncState::ValidateAndProcessSingleHeader(const CBlockHeader& current)
+{
+ Assume(m_download_state == State::PRESYNC);
+ if (m_download_state != State::PRESYNC) return false;
+
+ int next_height = m_current_height + 1;
+
+ // Verify that the difficulty isn't growing too fast; an adversary with
+ // limited hashing capability has a greater chance of producing a high
+ // work chain if they compress the work into as few blocks as possible,
+ // so don't let anyone give a chain that would violate the difficulty
+ // adjustment maximum.
+ if (!PermittedDifficultyTransition(m_consensus_params, next_height,
+ m_last_header_received.nBits, current.nBits)) {
+ LogPrint(BCLog::HEADERSSYNC, "Initial headers sync aborted with peer=%d: invalid difficulty transition at height=%i (presync phase)\n", m_id, next_height);
+ return false;
+ }
+
+ if (next_height % HEADER_COMMITMENT_PERIOD == m_commit_offset) {
+ // Add a commitment.
+ m_header_commitments.push_back(m_hasher(current.GetHash()) & 1);
+ if (m_header_commitments.size() > m_max_commitments) {
+ // The peer's chain is too long; give up.
+ // It's possible the chain grew since we started the sync; so
+ // potentially we could succeed in syncing the peer's chain if we
+ // try again later.
+ LogPrint(BCLog::HEADERSSYNC, "Initial headers sync aborted with peer=%d: exceeded max commitments at height=%i (presync phase)\n", m_id, next_height);
+ return false;
+ }
+ }
+
+ m_current_chain_work += GetBlockProof(CBlockIndex(current));
+ m_last_header_received = current;
+ m_current_height = next_height;
+
+ return true;
+}
+
+bool HeadersSyncState::ValidateAndStoreRedownloadedHeader(const CBlockHeader& header)
+{
+ Assume(m_download_state == State::REDOWNLOAD);
+ if (m_download_state != State::REDOWNLOAD) return false;
+
+ int64_t next_height = m_redownload_buffer_last_height + 1;
+
+ // Ensure that we're working on a header that connects to the chain we're
+ // downloading.
+ if (header.hashPrevBlock != m_redownload_buffer_last_hash) {
+ LogPrint(BCLog::HEADERSSYNC, "Initial headers sync aborted with peer=%d: non-continuous headers at height=%i (redownload phase)\n", m_id, next_height);
+ return false;
+ }
+
+ // Check that the difficulty adjustments are within our tolerance:
+ uint32_t previous_nBits{0};
+ if (!m_redownloaded_headers.empty()) {
+ previous_nBits = m_redownloaded_headers.back().nBits;
+ } else {
+ previous_nBits = m_chain_start->nBits;
+ }
+
+ if (!PermittedDifficultyTransition(m_consensus_params, next_height,
+ previous_nBits, header.nBits)) {
+ LogPrint(BCLog::HEADERSSYNC, "Initial headers sync aborted with peer=%d: invalid difficulty transition at height=%i (redownload phase)\n", m_id, next_height);
+ return false;
+ }
+
+ // Track work on the redownloaded chain
+ m_redownload_chain_work += GetBlockProof(CBlockIndex(header));
+
+ if (m_redownload_chain_work >= m_minimum_required_work) {
+ m_process_all_remaining_headers = true;
+ }
+
+ // If we're at a header for which we previously stored a commitment, verify
+ // it is correct. Failure will result in aborting download.
+ // Also, don't check commitments once we've gotten to our target blockhash;
+ // it's possible our peer has extended its chain between our first sync and
+ // our second, and we don't want to return failure after we've seen our
+ // target blockhash just because we ran out of commitments.
+ if (!m_process_all_remaining_headers && next_height % HEADER_COMMITMENT_PERIOD == m_commit_offset) {
+ if (m_header_commitments.size() == 0) {
+ LogPrint(BCLog::HEADERSSYNC, "Initial headers sync aborted with peer=%d: commitment overrun at height=%i (redownload phase)\n", m_id, next_height);
+ // Somehow our peer managed to feed us a different chain and
+ // we've run out of commitments.
+ return false;
+ }
+ bool commitment = m_hasher(header.GetHash()) & 1;
+ bool expected_commitment = m_header_commitments.front();
+ m_header_commitments.pop_front();
+ if (commitment != expected_commitment) {
+ LogPrint(BCLog::HEADERSSYNC, "Initial headers sync aborted with peer=%d: commitment mismatch at height=%i (redownload phase)\n", m_id, next_height);
+ return false;
+ }
+ }
+
+ // Store this header for later processing.
+ m_redownloaded_headers.push_back(header);
+ m_redownload_buffer_last_height = next_height;
+ m_redownload_buffer_last_hash = header.GetHash();
+
+ return true;
+}
+
+std::vector<CBlockHeader> HeadersSyncState::PopHeadersReadyForAcceptance()
+{
+ std::vector<CBlockHeader> ret;
+
+ Assume(m_download_state == State::REDOWNLOAD);
+ if (m_download_state != State::REDOWNLOAD) return ret;
+
+ while (m_redownloaded_headers.size() > REDOWNLOAD_BUFFER_SIZE ||
+ (m_redownloaded_headers.size() > 0 && m_process_all_remaining_headers)) {
+ ret.emplace_back(m_redownloaded_headers.front().GetFullHeader(m_redownload_buffer_first_prev_hash));
+ m_redownloaded_headers.pop_front();
+ m_redownload_buffer_first_prev_hash = ret.back().GetHash();
+ }
+ return ret;
+}
+
+CBlockLocator HeadersSyncState::NextHeadersRequestLocator() const
+{
+ Assume(m_download_state != State::FINAL);
+ if (m_download_state == State::FINAL) return {};
+
+ auto chain_start_locator = LocatorEntries(m_chain_start);
+ std::vector<uint256> locator;
+
+ if (m_download_state == State::PRESYNC) {
+ // During pre-synchronization, we continue from the last header received.
+ locator.push_back(m_last_header_received.GetHash());
+ }
+
+ if (m_download_state == State::REDOWNLOAD) {
+ // During redownload, we will download from the last received header that we stored.
+ locator.push_back(m_redownload_buffer_last_hash);
+ }
+
+ locator.insert(locator.end(), chain_start_locator.begin(), chain_start_locator.end());
+
+ return CBlockLocator{std::move(locator)};
+}
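To put the m_max_commitments bound computed in the constructor above into numbers: a hedged, editorial evaluation (not part of the commit), assuming a sync anchored near the genesis block and performed around the time of this merge:

    // chain_start's median-time-past near genesis: roughly 13.6 years ago, ~4.3e8 seconds
    // MAX_FUTURE_BLOCK_TIME is the 2-hour (7,200 s) allowance for future-dated blocks
    // m_max_commitments ~= 6 * (4.3e8 + 7200) / 584 ~= 4.4 million commitment bits
    // Bit-packed in the bitdeque, that is on the order of 550 kB of per-peer memory at
    // worst during the PRESYNC phase; a peer serving more headers than this has its sync aborted.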
diff --git a/src/headerssync.h b/src/headerssync.h
new file mode 100644
index 0000000000..16da964246
--- /dev/null
+++ b/src/headerssync.h
@@ -0,0 +1,277 @@
+// Copyright (c) 2022 The Bitcoin Core developers
+// Distributed under the MIT software license, see the accompanying
+// file COPYING or http://www.opensource.org/licenses/mit-license.php.
+
+#ifndef BITCOIN_HEADERSSYNC_H
+#define BITCOIN_HEADERSSYNC_H
+
+#include <arith_uint256.h>
+#include <chain.h>
+#include <consensus/params.h>
+#include <net.h> // For NodeId
+#include <primitives/block.h>
+#include <uint256.h>
+#include <util/bitdeque.h>
+#include <util/hasher.h>
+
+#include <deque>
+#include <vector>
+
+// A compressed CBlockHeader, which leaves out the prevhash
+struct CompressedHeader {
+ // header
+ int32_t nVersion{0};
+ uint256 hashMerkleRoot;
+ uint32_t nTime{0};
+ uint32_t nBits{0};
+ uint32_t nNonce{0};
+
+ CompressedHeader()
+ {
+ hashMerkleRoot.SetNull();
+ }
+
+ CompressedHeader(const CBlockHeader& header)
+ {
+ nVersion = header.nVersion;
+ hashMerkleRoot = header.hashMerkleRoot;
+ nTime = header.nTime;
+ nBits = header.nBits;
+ nNonce = header.nNonce;
+ }
+
+ CBlockHeader GetFullHeader(const uint256& hash_prev_block) {
+ CBlockHeader ret;
+ ret.nVersion = nVersion;
+ ret.hashPrevBlock = hash_prev_block;
+ ret.hashMerkleRoot = hashMerkleRoot;
+ ret.nTime = nTime;
+ ret.nBits = nBits;
+ ret.nNonce = nNonce;
+ return ret;
+ };
+};
+
+/** HeadersSyncState:
+ *
+ * We wish to download a peer's headers chain in a DoS-resistant way.
+ *
+ * The Bitcoin protocol does not offer an easy way to determine the work on a
+ * peer's chain. Currently, we can query a peer's headers by using a GETHEADERS
+ * message, and our peer can return a set of up to 2000 headers that connect to
+ * something we know. If a peer's chain has more than 2000 blocks, then we need
+ * a way to verify that the chain actually has enough work on it to be useful to
+ * us -- by being above our anti-DoS minimum-chain-work threshold -- before we
+ * commit to storing those headers in memory. Otherwise, it would be cheap for
+ * an attacker to waste all our memory by serving us low-work headers
+ * (particularly for a new node coming online for the first time).
+ *
+ * To prevent memory-DoS with low-work headers, while still always being
+ * able to reorg to whatever the most-work chain is, we require that a chain
+ * meet a work threshold before committing it to memory. We can do this by
+ * downloading a peer's headers twice, whenever we are not sure that the chain
+ * has sufficient work:
+ *
+ * - In the first download phase, called pre-synchronization, we can calculate
+ * the work on the chain as we go (just by checking the nBits value on each
+ * header, and validating the proof-of-work).
+ *
+ * - Once we have reached a header where the cumulative chain work is
+ * sufficient, we switch to downloading the headers a second time, this time
+ * processing them fully, and possibly storing them in memory.
+ *
+ * To prevent an attacker from using (eg) the honest chain to convince us that
+ * they have a high-work chain, but then feeding us an alternate set of
+ * low-difficulty headers in the second phase, we store commitments to the
+ * chain we see in the first download phase that we check in the second phase,
+ * as follows:
+ *
+ * - In phase 1 (presync), store 1 bit (using a salted hash function) for every
+ * N headers that we see. With a reasonable choice of N, this uses relatively
+ * little memory even for a very long chain.
+ *
+ * - In phase 2 (redownload), keep a lookahead buffer and only accept headers
+ * from that buffer into the block index (permanent memory usage) once they
+ * have some target number of verified commitments on top of them. With this
+ * parametrization, we can achieve a given security target for potential
+ * permanent memory usage, while choosing N to minimize memory use during the
+ * sync (temporary, per-peer storage).
+ */
+
+class HeadersSyncState {
+public:
+ ~HeadersSyncState() {}
+
+ enum class State {
+ /** PRESYNC means the peer has not yet demonstrated their chain has
+ * sufficient work and we're only building commitments to the chain they
+ * serve us. */
+ PRESYNC,
+ /** REDOWNLOAD means the peer has given us a high-enough-work chain,
+ * and now we're redownloading the headers we saw before and trying to
+ * accept them */
+ REDOWNLOAD,
+ /** We're done syncing with this peer and can discard any remaining state */
+ FINAL
+ };
+
+ /** Return the current state of our download */
+ State GetState() const { return m_download_state; }
+
+ /** Return the height reached during the PRESYNC phase */
+ int64_t GetPresyncHeight() const { return m_current_height; }
+
+ /** Return the block timestamp of the last header received during the PRESYNC phase. */
+ uint32_t GetPresyncTime() const { return m_last_header_received.nTime; }
+
+ /** Return the amount of work in the chain received during the PRESYNC phase. */
+ arith_uint256 GetPresyncWork() const { return m_current_chain_work; }
+
+ /** Construct a HeadersSyncState object representing a headers sync via this
+ * download-twice mechanism.
+ *
+ * id: node id (for logging)
+ * consensus_params: parameters needed for difficulty adjustment validation
+ * chain_start: best known fork point that the peer's headers branch from
+ * minimum_required_work: amount of chain work required to accept the chain
+ */
+ HeadersSyncState(NodeId id, const Consensus::Params& consensus_params,
+ const CBlockIndex* chain_start, const arith_uint256& minimum_required_work);
+
+ /** Result data structure for ProcessNextHeaders. */
+ struct ProcessingResult {
+ std::vector<CBlockHeader> pow_validated_headers;
+ bool success{false};
+ bool request_more{false};
+ };
+
+ /** Process a batch of headers, once a sync via this mechanism has started
+ *
+ * received_headers: headers that were received over the network for processing.
+ * Assumes the caller has already verified the headers
+ * are continuous, and has checked that each header
+ * satisfies the proof-of-work target included in the
+ * header (but not necessarily verified that the
+ * proof-of-work target is correct and passes consensus
+ * rules).
+ * full_headers_message: true if the message was at max capacity,
+ * indicating more headers may be available
+ * ProcessingResult.pow_validated_headers: will be filled in with any
+ * headers that the caller can fully process and
+ * validate now (because these returned headers are
+ * on a chain with sufficient work)
+ * ProcessingResult.success: set to false if an error is detected and the sync is
+ * aborted; true otherwise.
+ * ProcessingResult.request_more: if true, the caller is suggested to call
+ * NextHeadersRequestLocator and send a getheaders message using it.
+ */
+ ProcessingResult ProcessNextHeaders(const std::vector<CBlockHeader>&
+ received_headers, bool full_headers_message);
+
+ /** Issue the next GETHEADERS message to our peer.
+ *
+ * This will return a locator appropriate for the current sync object, to continue the
+ * synchronization phase it is in.
+ */
+ CBlockLocator NextHeadersRequestLocator() const;
+
+private:
+ /** Clear out all download state that might be in progress (freeing any used
+ * memory), and mark this object as no longer usable.
+ */
+ void Finalize();
+
+ /**
+ * Only called in PRESYNC.
+ * Validate the work on the headers we received from the network, and
+ * store commitments for later. Update overall state with successfully
+ * processed headers.
+ * On failure, this invokes Finalize() and returns false.
+ */
+ bool ValidateAndStoreHeadersCommitments(const std::vector<CBlockHeader>& headers);
+
+ /** In PRESYNC, process and update state for a single header */
+ bool ValidateAndProcessSingleHeader(const CBlockHeader& current);
+
+ /** In REDOWNLOAD, check a header's commitment (if applicable) and add to
+ * buffer for later processing */
+ bool ValidateAndStoreRedownloadedHeader(const CBlockHeader& header);
+
+ /** Return a set of headers that satisfy our proof-of-work threshold */
+ std::vector<CBlockHeader> PopHeadersReadyForAcceptance();
+
+private:
+ /** NodeId of the peer (used for log messages) **/
+ const NodeId m_id;
+
+ /** We use the consensus params in our anti-DoS calculations */
+ const Consensus::Params& m_consensus_params;
+
+ /** Store the last block in our block index that the peer's chain builds from */
+ const CBlockIndex* m_chain_start{nullptr};
+
+ /** Minimum work that we're looking for on this chain. */
+ const arith_uint256 m_minimum_required_work;
+
+ /** Work that we've seen so far on the peer's chain */
+ arith_uint256 m_current_chain_work;
+
+ /** m_hasher is a salted hasher for making our 1-bit commitments to headers we've seen. */
+ const SaltedTxidHasher m_hasher;
+
+ /** A queue of commitment bits, created during the 1st phase, and verified during the 2nd. */
+ bitdeque<> m_header_commitments;
+
+ /** The (secret) offset on the heights for which to create commitments.
+ *
+ * m_header_commitments entries are created at any height h for which
+ * (h % HEADER_COMMITMENT_PERIOD) == m_commit_offset. */
+ const unsigned m_commit_offset;
+
+ /** m_max_commitments is a bound we calculate on how long an honest peer's chain could be,
+ * given the MTP rule.
+ *
+ * Any peer giving us more headers than this will have its sync aborted. This serves as a
+ * memory bound on m_header_commitments. */
+ uint64_t m_max_commitments{0};
+
+ /** Store the latest header received while in PRESYNC (initialized to m_chain_start) */
+ CBlockHeader m_last_header_received;
+
+ /** Height of m_last_header_received */
+ int64_t m_current_height{0};
+
+ /** During phase 2 (REDOWNLOAD), we buffer redownloaded headers in memory
+ * until enough commitments have been verified; those are stored in
+ * m_redownloaded_headers */
+ std::deque<CompressedHeader> m_redownloaded_headers;
+
+ /** Height of last header in m_redownloaded_headers */
+ int64_t m_redownload_buffer_last_height{0};
+
+ /** Hash of last header in m_redownloaded_headers (initialized to
+ * m_chain_start). We have to cache it because we don't have hashPrevBlock
+ * available in a CompressedHeader.
+ */
+ uint256 m_redownload_buffer_last_hash;
+
+ /** The hashPrevBlock entry for the first header in m_redownloaded_headers
+ * We need this to reconstruct the full header when it's time for
+ * processing.
+ */
+ uint256 m_redownload_buffer_first_prev_hash;
+
+ /** The accumulated work on the redownloaded chain. */
+ arith_uint256 m_redownload_chain_work;
+
+ /** Set this to true once we encounter the target blockheader during phase
+ * 2 (REDOWNLOAD). At this point, we can process and store all remaining
+ * headers still in m_redownloaded_headers.
+ */
+ bool m_process_all_remaining_headers{false};
+
+ /** Current state of our headers sync. */
+ State m_download_state{State::PRESYNC};
+};
+
+#endif // BITCOIN_HEADERSSYNC_H
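A minimal caller-side sketch of how the class above is driven (an editorial illustration, not part of the commit; the real integration point is PeerManagerImpl::IsContinuationOfLowWorkHeadersSync() in the net_processing.cpp hunks below). node_id, consensus_params, chain_start_index, min_work and received_headers are placeholder names for the caller's own state:

    // One HeadersSyncState per peer whose headers chain has not yet demonstrated enough work.
    auto sync = std::make_unique<HeadersSyncState>(node_id, consensus_params,
                                                   chain_start_index, min_work);

    // For each headers message received from that peer:
    auto result = sync->ProcessNextHeaders(received_headers,
            /*full_headers_message=*/received_headers.size() == MAX_HEADERS_RESULTS);
    if (!result.pow_validated_headers.empty()) {
        // Only non-empty during REDOWNLOAD: these headers lie on a chain that met the
        // anti-DoS work threshold and can now be handed to block index acceptance.
    }
    if (result.request_more) {
        // Continue the current phase from wherever it left off.
        CBlockLocator locator = sync->NextHeadersRequestLocator();
        // ... send a getheaders message built from this locator ...
    }
    if (!result.success || sync->GetState() == HeadersSyncState::State::FINAL) {
        sync.reset();  // sync aborted or finished; free the per-peer state
    }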
diff --git a/src/interfaces/node.h b/src/interfaces/node.h
index 2c31e12ada..dbdb21eb91 100644
--- a/src/interfaces/node.h
+++ b/src/interfaces/node.h
@@ -260,7 +260,7 @@ public:
//! Register handler for header tip messages.
using NotifyHeaderTipFn =
- std::function<void(SynchronizationState, interfaces::BlockTip tip, double verification_progress)>;
+ std::function<void(SynchronizationState, interfaces::BlockTip tip, bool presync)>;
virtual std::unique_ptr<Handler> handleNotifyHeaderTip(NotifyHeaderTipFn fn) = 0;
//! Get and set internal node context. Useful for testing, but not
diff --git a/src/logging.cpp b/src/logging.cpp
index 73c4e458bd..92fc31917f 100644
--- a/src/logging.cpp
+++ b/src/logging.cpp
@@ -165,6 +165,7 @@ const CLogCategoryDesc LogCategories[] =
#endif
{BCLog::UTIL, "util"},
{BCLog::BLOCKSTORE, "blockstorage"},
+ {BCLog::HEADERSSYNC, "headerssync"},
{BCLog::ALL, "1"},
{BCLog::ALL, "all"},
};
@@ -263,6 +264,8 @@ std::string LogCategoryToStr(BCLog::LogFlags category)
return "util";
case BCLog::LogFlags::BLOCKSTORE:
return "blockstorage";
+ case BCLog::LogFlags::HEADERSSYNC:
+ return "headerssync";
case BCLog::LogFlags::ALL:
return "all";
}
diff --git a/src/logging.h b/src/logging.h
index 50869ad89a..a7f18f7560 100644
--- a/src/logging.h
+++ b/src/logging.h
@@ -65,6 +65,7 @@ namespace BCLog {
#endif
UTIL = (1 << 25),
BLOCKSTORE = (1 << 26),
+ HEADERSSYNC = (1 << 27),
ALL = ~(uint32_t)0,
};
enum class Level {
diff --git a/src/net_processing.cpp b/src/net_processing.cpp
index fff7f86d78..746e3bedde 100644
--- a/src/net_processing.cpp
+++ b/src/net_processing.cpp
@@ -14,6 +14,7 @@
#include <consensus/validation.h>
#include <deploymentstatus.h>
#include <hash.h>
+#include <headerssync.h>
#include <index/blockfilterindex.h>
#include <merkleblock.h>
#include <netbase.h>
@@ -381,6 +382,15 @@ struct Peer {
/** Time of the last getheaders message to this peer */
NodeClock::time_point m_last_getheaders_timestamp{};
+ /** Protects m_headers_sync **/
+ Mutex m_headers_sync_mutex;
+ /** Headers-sync state for this peer (eg for initial sync, or syncing large
+ * reorgs) **/
+ std::unique_ptr<HeadersSyncState> m_headers_sync PT_GUARDED_BY(m_headers_sync_mutex) GUARDED_BY(m_headers_sync_mutex) {};
+
+ /** Whether we've sent our peer a sendheaders message. **/
+ std::atomic<bool> m_sent_sendheaders{false};
+
explicit Peer(NodeId id, ServiceFlags our_services)
: m_id{id}
, m_our_services{our_services}
@@ -503,9 +513,9 @@ public:
/** Implement NetEventsInterface */
void InitializeNode(CNode& node, ServiceFlags our_services) override EXCLUSIVE_LOCKS_REQUIRED(!m_peer_mutex);
- void FinalizeNode(const CNode& node) override EXCLUSIVE_LOCKS_REQUIRED(!m_peer_mutex);
+ void FinalizeNode(const CNode& node) override EXCLUSIVE_LOCKS_REQUIRED(!m_peer_mutex, !m_headers_presync_mutex);
bool ProcessMessages(CNode* pfrom, std::atomic<bool>& interrupt) override
- EXCLUSIVE_LOCKS_REQUIRED(!m_peer_mutex, !m_recent_confirmed_transactions_mutex, !m_most_recent_block_mutex);
+ EXCLUSIVE_LOCKS_REQUIRED(!m_peer_mutex, !m_recent_confirmed_transactions_mutex, !m_most_recent_block_mutex, !m_headers_presync_mutex);
bool SendMessages(CNode* pto) override EXCLUSIVE_LOCKS_REQUIRED(pto->cs_sendProcessing)
EXCLUSIVE_LOCKS_REQUIRED(!m_peer_mutex, !m_recent_confirmed_transactions_mutex, !m_most_recent_block_mutex);
@@ -522,7 +532,7 @@ public:
void UnitTestMisbehaving(NodeId peer_id, int howmuch) override EXCLUSIVE_LOCKS_REQUIRED(!m_peer_mutex) { Misbehaving(*Assert(GetPeerRef(peer_id)), howmuch, ""); };
void ProcessMessage(CNode& pfrom, const std::string& msg_type, CDataStream& vRecv,
const std::chrono::microseconds time_received, const std::atomic<bool>& interruptMsgProc) override
- EXCLUSIVE_LOCKS_REQUIRED(!m_peer_mutex, !m_recent_confirmed_transactions_mutex, !m_most_recent_block_mutex);
+ EXCLUSIVE_LOCKS_REQUIRED(!m_peer_mutex, !m_recent_confirmed_transactions_mutex, !m_most_recent_block_mutex, !m_headers_presync_mutex);
void UpdateLastBlockAnnounceTime(NodeId node, int64_t time_in_seconds) override;
private:
@@ -581,18 +591,70 @@ private:
void ProcessOrphanTx(std::set<uint256>& orphan_work_set) EXCLUSIVE_LOCKS_REQUIRED(cs_main, g_cs_orphans)
EXCLUSIVE_LOCKS_REQUIRED(!m_peer_mutex);
- /** Process a single headers message from a peer. */
+ /** Process a single headers message from a peer.
+ *
+ * @param[in] pfrom CNode of the peer
+ * @param[in] peer The peer sending us the headers
+ * @param[in] headers The headers received. Note that this may be modified within ProcessHeadersMessage.
+ * @param[in] via_compact_block Whether this header came in via compact block handling.
+ */
void ProcessHeadersMessage(CNode& pfrom, Peer& peer,
- const std::vector<CBlockHeader>& headers,
+ std::vector<CBlockHeader>&& headers,
bool via_compact_block)
- EXCLUSIVE_LOCKS_REQUIRED(!m_peer_mutex);
+ EXCLUSIVE_LOCKS_REQUIRED(!m_peer_mutex, !m_headers_presync_mutex);
/** Various helpers for headers processing, invoked by ProcessHeadersMessage() */
+ /** Return true if headers are continuous and have valid proof-of-work (DoS points assigned on failure) */
+ bool CheckHeadersPoW(const std::vector<CBlockHeader>& headers, const Consensus::Params& consensusParams, Peer& peer);
+ /** Calculate an anti-DoS work threshold for headers chains */
+ arith_uint256 GetAntiDoSWorkThreshold();
/** Deal with state tracking and headers sync for peers that send the
* occasional non-connecting header (this can happen due to BIP 130 headers
* announcements for blocks interacting with the 2hr (MAX_FUTURE_BLOCK_TIME) rule). */
void HandleFewUnconnectingHeaders(CNode& pfrom, Peer& peer, const std::vector<CBlockHeader>& headers);
/** Return true if the headers connect to each other, false otherwise */
bool CheckHeadersAreContinuous(const std::vector<CBlockHeader>& headers) const;
+ /** Try to continue a low-work headers sync that has already begun.
+ * Assumes the caller has already verified the headers connect, and has
+ * checked that each header satisfies the proof-of-work target included in
+ * the header.
+ * @param[in] peer The peer we're syncing with.
+ * @param[in] pfrom CNode of the peer
+ * @param[in,out] headers The headers to be processed.
+ * @return True if the passed in headers were successfully processed
+ * as the continuation of a low-work headers sync in progress;
+ * false otherwise.
+ * If false, the passed in headers will be returned back to
+ * the caller.
+ * If true, the returned headers may be empty, indicating
+ * there is no more work for the caller to do; or the headers
+ * may be populated with entries that have passed anti-DoS
+ * checks (and therefore may be validated for block index
+ * acceptance by the caller).
+ */
+ bool IsContinuationOfLowWorkHeadersSync(Peer& peer, CNode& pfrom,
+ std::vector<CBlockHeader>& headers)
+ EXCLUSIVE_LOCKS_REQUIRED(peer.m_headers_sync_mutex, !m_headers_presync_mutex);
+ /** Check work on a headers chain to be processed, and if insufficient,
+ * initiate our anti-DoS headers sync mechanism.
+ *
+ * @param[in] peer The peer whose headers we're processing.
+ * @param[in] pfrom CNode of the peer
+ * @param[in] chain_start_header Where these headers connect in our index.
+ * @param[in,out] headers The headers to be processed.
+ *
+ * @return True if chain was low work and a headers sync was
+ * initiated (and headers will be empty after calling); false
+ * otherwise.
+ */
+ bool TryLowWorkHeadersSync(Peer& peer, CNode& pfrom,
+ const CBlockIndex* chain_start_header,
+ std::vector<CBlockHeader>& headers)
+ EXCLUSIVE_LOCKS_REQUIRED(!peer.m_headers_sync_mutex, !m_peer_mutex, !m_headers_presync_mutex);
+
+ /** Return true if the given header is an ancestor of
+ * m_chainman.m_best_header or our current tip */
+ bool IsAncestorOfBestHeaderOrTip(const CBlockIndex* header) EXCLUSIVE_LOCKS_REQUIRED(cs_main);
+
/** Request further headers from this peer with a given locator.
* We don't issue a getheaders message if we have a recent one outstanding.
* This returns true if a getheaders is actually sent, and false otherwise.
@@ -623,6 +685,9 @@ private:
/** Send `addr` messages on a regular schedule. */
void MaybeSendAddr(CNode& node, Peer& peer, std::chrono::microseconds current_time);
+ /** Send a single `sendheaders` message, after we have completed headers sync with a peer. */
+ void MaybeSendSendHeaders(CNode& node, Peer& peer);
+
/** Relay (gossip) an address to a few randomly chosen nodes.
*
* @param[in] originator The id of the peer that sent us the address. We don't want to relay it back.
@@ -779,6 +844,24 @@ private:
std::shared_ptr<const CBlockHeaderAndShortTxIDs> m_most_recent_compact_block GUARDED_BY(m_most_recent_block_mutex);
uint256 m_most_recent_block_hash GUARDED_BY(m_most_recent_block_mutex);
+ // Data about the low-work headers synchronization, aggregated from all peers' HeadersSyncStates.
+ /** Mutex guarding the other m_headers_presync_* variables. */
+ Mutex m_headers_presync_mutex;
+ /** A type to represent statistics about a peer's low-work headers sync.
+ *
+ * - The first field is the total verified amount of work in that synchronization.
+ * - The second is:
+ * - nullopt: the sync is in REDOWNLOAD phase (phase 2).
+ * - {height, timestamp}: the sync has the specified tip height and block timestamp (phase 1).
+ */
+ using HeadersPresyncStats = std::pair<arith_uint256, std::optional<std::pair<int64_t, uint32_t>>>;
+ /** Statistics for all peers in low-work headers sync. */
+ std::map<NodeId, HeadersPresyncStats> m_headers_presync_stats GUARDED_BY(m_headers_presync_mutex) {};
+ /** The peer with the most-work entry in m_headers_presync_stats. */
+ NodeId m_headers_presync_bestpeer GUARDED_BY(m_headers_presync_mutex) {-1};
+ /** The m_headers_presync_stats improved, and needs signalling. */
+ std::atomic_bool m_headers_presync_should_signal{false};
+
/** Height of the highest block announced using BIP 152 high-bandwidth mode. */
int m_highest_fast_announce{0};
@@ -816,7 +899,7 @@ private:
EXCLUSIVE_LOCKS_REQUIRED(!m_most_recent_block_mutex, peer.m_getdata_requests_mutex) LOCKS_EXCLUDED(::cs_main);
/** Process a new block. Perform any post-processing housekeeping */
- void ProcessBlock(CNode& node, const std::shared_ptr<const CBlock>& block, bool force_processing);
+ void ProcessBlock(CNode& node, const std::shared_ptr<const CBlock>& block, bool force_processing, bool min_pow_checked);
/** Relay map (txid or wtxid -> CTransactionRef) */
typedef std::map<uint256, CTransactionRef> MapRelay;
@@ -1437,6 +1520,10 @@ void PeerManagerImpl::FinalizeNode(const CNode& node)
// fSuccessfullyConnected set.
m_addrman.Connected(node.addr);
}
+ {
+ LOCK(m_headers_presync_mutex);
+ m_headers_presync_stats.erase(nodeid);
+ }
LogPrint(BCLog::NET, "Cleared nodestate for peer=%d\n", nodeid);
}
@@ -1501,6 +1588,12 @@ bool PeerManagerImpl::GetNodeStateStats(NodeId nodeid, CNodeStateStats& stats) c
stats.m_addr_processed = peer->m_addr_processed.load();
stats.m_addr_rate_limited = peer->m_addr_rate_limited.load();
stats.m_addr_relay_enabled = peer->m_addr_relay_enabled.load();
+ {
+ LOCK(peer->m_headers_sync_mutex);
+ if (peer->m_headers_sync) {
+ stats.presync_height = peer->m_headers_sync->GetPresyncHeight();
+ }
+ }
return true;
}
@@ -1544,6 +1637,10 @@ bool PeerManagerImpl::MaybePunishNodeForBlock(NodeId nodeid, const BlockValidati
switch (state.GetResult()) {
case BlockValidationResult::BLOCK_RESULT_UNSET:
break;
+ case BlockValidationResult::BLOCK_HEADER_LOW_WORK:
+ // We didn't try to process the block because the header chain may have
+ // too little work.
+ break;
// The node is providing invalid data:
case BlockValidationResult::BLOCK_CONSENSUS:
case BlockValidationResult::BLOCK_MUTATED:
@@ -2263,6 +2360,35 @@ void PeerManagerImpl::SendBlockTransactions(CNode& pfrom, Peer& peer, const CBlo
m_connman.PushMessage(&pfrom, msgMaker.Make(NetMsgType::BLOCKTXN, resp));
}
+bool PeerManagerImpl::CheckHeadersPoW(const std::vector<CBlockHeader>& headers, const Consensus::Params& consensusParams, Peer& peer)
+{
+ // Do these headers have proof-of-work matching what's claimed?
+ if (!HasValidProofOfWork(headers, consensusParams)) {
+ Misbehaving(peer, 100, "header with invalid proof of work");
+ return false;
+ }
+
+ // Are these headers connected to each other?
+ if (!CheckHeadersAreContinuous(headers)) {
+ Misbehaving(peer, 20, "non-continuous headers sequence");
+ return false;
+ }
+ return true;
+}
+
+arith_uint256 PeerManagerImpl::GetAntiDoSWorkThreshold()
+{
+ arith_uint256 near_chaintip_work = 0;
+ LOCK(cs_main);
+ if (m_chainman.ActiveChain().Tip() != nullptr) {
+ const CBlockIndex *tip = m_chainman.ActiveChain().Tip();
+ // Use a 144 block buffer, so that we'll accept headers that fork from
+ // near our tip.
+ near_chaintip_work = tip->nChainWork - std::min<arith_uint256>(144*GetBlockProof(*tip), tip->nChainWork);
+ }
+ return std::max(near_chaintip_work, arith_uint256(nMinimumChainWork));
+}
+
/**
* Special handling for unconnecting headers that might be part of a block
* announcement.
@@ -2285,7 +2411,7 @@ void PeerManagerImpl::HandleFewUnconnectingHeaders(CNode& pfrom, Peer& peer,
nodestate->nUnconnectingHeaders++;
// Try to fill in the missing headers.
- if (MaybeSendGetHeaders(pfrom, m_chainman.ActiveChain().GetLocator(m_chainman.m_best_header), peer)) {
+ if (MaybeSendGetHeaders(pfrom, GetLocator(m_chainman.m_best_header), peer)) {
LogPrint(BCLog::NET, "received header %s: missing prev block %s, sending getheaders (%d) to end (peer=%d, nUnconnectingHeaders=%d)\n",
headers[0].GetHash().ToString(),
headers[0].hashPrevBlock.ToString(),
@@ -2316,6 +2442,146 @@ bool PeerManagerImpl::CheckHeadersAreContinuous(const std::vector<CBlockHeader>&
return true;
}
+bool PeerManagerImpl::IsContinuationOfLowWorkHeadersSync(Peer& peer, CNode& pfrom, std::vector<CBlockHeader>& headers)
+{
+ if (peer.m_headers_sync) {
+ auto result = peer.m_headers_sync->ProcessNextHeaders(headers, headers.size() == MAX_HEADERS_RESULTS);
+ if (result.request_more) {
+ auto locator = peer.m_headers_sync->NextHeadersRequestLocator();
+ // If we were instructed to ask for a locator, it should not be empty.
+ Assume(!locator.vHave.empty());
+ if (!locator.vHave.empty()) {
+ // It should be impossible for the getheaders request to fail,
+ // because we should have cleared the last getheaders timestamp
+ // when processing the headers that triggered this call. But
+ // it may be possible to bypass this via compactblock
+ // processing, so check the result before logging just to be
+ // safe.
+ bool sent_getheaders = MaybeSendGetHeaders(pfrom, locator, peer);
+ if (sent_getheaders) {
+ LogPrint(BCLog::NET, "more getheaders (from %s) to peer=%d\n",
+ locator.vHave.front().ToString(), pfrom.GetId());
+ } else {
+ LogPrint(BCLog::NET, "error sending next getheaders (from %s) to continue sync with peer=%d\n",
+ locator.vHave.front().ToString(), pfrom.GetId());
+ }
+ }
+ }
+
+ if (peer.m_headers_sync->GetState() == HeadersSyncState::State::FINAL) {
+ peer.m_headers_sync.reset(nullptr);
+
+ // Delete this peer's entry in m_headers_presync_stats.
+ // If this is m_headers_presync_bestpeer, it will be replaced later
+ // by the next peer that triggers the else{} branch below.
+ LOCK(m_headers_presync_mutex);
+ m_headers_presync_stats.erase(pfrom.GetId());
+ } else {
+ // Build statistics for this peer's sync.
+ HeadersPresyncStats stats;
+ stats.first = peer.m_headers_sync->GetPresyncWork();
+ if (peer.m_headers_sync->GetState() == HeadersSyncState::State::PRESYNC) {
+ stats.second = {peer.m_headers_sync->GetPresyncHeight(),
+ peer.m_headers_sync->GetPresyncTime()};
+ }
+
+ // Update statistics in stats.
+ LOCK(m_headers_presync_mutex);
+ m_headers_presync_stats[pfrom.GetId()] = stats;
+ auto best_it = m_headers_presync_stats.find(m_headers_presync_bestpeer);
+ bool best_updated = false;
+ if (best_it == m_headers_presync_stats.end()) {
+ // If the cached best peer is outdated, iterate over all remaining ones (including
+ // newly updated one) to find the best one.
+ NodeId peer_best{-1};
+ const HeadersPresyncStats* stat_best{nullptr};
+ for (const auto& [peer, stat] : m_headers_presync_stats) {
+ if (!stat_best || stat > *stat_best) {
+ peer_best = peer;
+ stat_best = &stat;
+ }
+ }
+ m_headers_presync_bestpeer = peer_best;
+ best_updated = (peer_best == pfrom.GetId());
+ } else if (best_it->first == pfrom.GetId() || stats > best_it->second) {
+ // pfrom was and remains the best peer, or pfrom just became best.
+ m_headers_presync_bestpeer = pfrom.GetId();
+ best_updated = true;
+ }
+ if (best_updated && stats.second.has_value()) {
+ // If the best peer updated, and it is in its first phase, signal.
+ m_headers_presync_should_signal = true;
+ }
+ }
+
+ if (result.success) {
+ // We only overwrite the headers passed in if processing was
+ // successful.
+ headers.swap(result.pow_validated_headers);
+ }
+
+ return result.success;
+ }
+ // Either we didn't have a sync in progress, or something went wrong
+ // processing these headers, or we are returning headers to the caller to
+ // process.
+ return false;
+}
+
+bool PeerManagerImpl::TryLowWorkHeadersSync(Peer& peer, CNode& pfrom, const CBlockIndex* chain_start_header, std::vector<CBlockHeader>& headers)
+{
+ // Calculate the total work on this chain.
+ arith_uint256 total_work = chain_start_header->nChainWork + CalculateHeadersWork(headers);
+
+ // Our dynamic anti-DoS threshold (minimum work required on a headers chain
+ // before we'll store it)
+ arith_uint256 minimum_chain_work = GetAntiDoSWorkThreshold();
+
+ // Avoid DoS via low-difficulty-headers by only processing if the headers
+ // are part of a chain with sufficient work.
+ if (total_work < minimum_chain_work) {
+ // Only try to sync with this peer if their headers message was full;
+ // otherwise they don't have more headers after this so no point in
+ // trying to sync their too-little-work chain.
+ if (headers.size() == MAX_HEADERS_RESULTS) {
+ // Note: we could advance to the last header in this set that is
+ // known to us, rather than starting at the first header (which we
+ // may already have); however this is unlikely to matter much since
+ // ProcessHeadersMessage() already handles the case where all
+ // headers in a received message are already known and are
+ // ancestors of m_best_header or chainActive.Tip(), by skipping
+ // this logic in that case. So even if the first header in this set
+ // of headers is known, some header in this set must be new, so
+ // advancing to the first unknown header would be a small effect.
+ LOCK(peer.m_headers_sync_mutex);
+ peer.m_headers_sync.reset(new HeadersSyncState(peer.m_id, m_chainparams.GetConsensus(),
+ chain_start_header, minimum_chain_work));
+
+ // Now a HeadersSyncState object for tracking this synchronization is created,
+ // process the headers using it as normal.
+ return IsContinuationOfLowWorkHeadersSync(peer, pfrom, headers);
+ } else {
+ LogPrint(BCLog::NET, "Ignoring low-work chain (height=%u) from peer=%d\n", chain_start_header->nHeight + headers.size(), pfrom.GetId());
+ // Since this is a low-work headers chain, no further processing is required.
+ headers = {};
+ return true;
+ }
+ }
+ return false;
+}
+
+bool PeerManagerImpl::IsAncestorOfBestHeaderOrTip(const CBlockIndex* header)
+{
+ if (header == nullptr) {
+ return false;
+ } else if (m_chainman.m_best_header != nullptr && header == m_chainman.m_best_header->GetAncestor(header->nHeight)) {
+ return true;
+ } else if (m_chainman.ActiveChain().Contains(header)) {
+ return true;
+ }
+ return false;
+}
+
bool PeerManagerImpl::MaybeSendGetHeaders(CNode& pfrom, const CBlockLocator& locator, Peer& peer)
{
const CNetMsgMaker msgMaker(pfrom.GetCommonVersion());
@@ -2461,21 +2727,73 @@ void PeerManagerImpl::UpdatePeerStateForReceivedHeaders(CNode& pfrom,
}
void PeerManagerImpl::ProcessHeadersMessage(CNode& pfrom, Peer& peer,
- const std::vector<CBlockHeader>& headers,
+ std::vector<CBlockHeader>&& headers,
bool via_compact_block)
{
- const CNetMsgMaker msgMaker(pfrom.GetCommonVersion());
size_t nCount = headers.size();
if (nCount == 0) {
// Nothing interesting. Stop asking this peer for more headers.
+ // If we were in the middle of headers sync, receiving an empty headers
+ // message suggests that the peer suddenly has nothing to give us
+ // (perhaps it reorged to our chain). Clear download state for this peer.
+ LOCK(peer.m_headers_sync_mutex);
+ if (peer.m_headers_sync) {
+ peer.m_headers_sync.reset(nullptr);
+ LOCK(m_headers_presync_mutex);
+ m_headers_presync_stats.erase(pfrom.GetId());
+ }
+ return;
+ }
+
+ // Before we do any processing, make sure these pass basic sanity checks.
+ // We'll rely on headers having valid proof-of-work further down, as an
+ // anti-DoS criterion (note: this check is required before passing any
+ // headers into HeadersSyncState).
+ if (!CheckHeadersPoW(headers, m_chainparams.GetConsensus(), peer)) {
+ // Misbehaving() calls are handled within CheckHeadersPoW(), so we can
+ // just return. (Note that even if a header is announced via compact
+ // block, the header itself should be valid, so this type of error can
+ // always be punished.)
return;
}
const CBlockIndex *pindexLast = nullptr;
+ // We'll set already_validated_work to true if these headers are
+ // successfully processed as part of a low-work headers sync in progress
+ // (either in PRESYNC or REDOWNLOAD phase).
+ // If true, this will mean that any headers returned to us (ie during
+ // REDOWNLOAD) can be validated without further anti-DoS checks.
+ bool already_validated_work = false;
+
+ // If we're in the middle of headers sync, let it do its magic.
+ bool have_headers_sync = false;
+ {
+ LOCK(peer.m_headers_sync_mutex);
+
+ already_validated_work = IsContinuationOfLowWorkHeadersSync(peer, pfrom, headers);
+
+ // The headers we passed in may have been:
+ // - untouched, perhaps if no headers-sync was in progress, or some
+ // failure occurred
+ // - erased, such as if the headers were successfully processed and no
+ // additional headers processing needs to take place (such as if we
+ // are still in PRESYNC)
+ // - replaced with headers that are now ready for validation, such as
+ // during the REDOWNLOAD phase of a low-work headers sync.
+ // So just check whether we still have headers that we need to process,
+ // or not.
+ if (headers.empty()) {
+ return;
+ }
+
+ have_headers_sync = !!peer.m_headers_sync;
+ }
+
// Do these headers connect to something in our block index?
- bool headers_connect_blockindex{WITH_LOCK(::cs_main, return m_chainman.m_blockman.LookupBlockIndex(headers[0].hashPrevBlock) != nullptr)};
+ const CBlockIndex *chain_start_header{WITH_LOCK(::cs_main, return m_chainman.m_blockman.LookupBlockIndex(headers[0].hashPrevBlock))};
+ bool headers_connect_blockindex{chain_start_header != nullptr};
if (!headers_connect_blockindex) {
if (nCount <= MAX_BLOCKS_TO_ANNOUNCE) {
@@ -2489,28 +2807,51 @@ void PeerManagerImpl::ProcessHeadersMessage(CNode& pfrom, Peer& peer,
return;
}
+ // If the headers we received are already in memory and an ancestor of
+ // m_best_header or our tip, skip anti-DoS checks. These headers will not
+ // use any more memory (and we are not leaking information that could be
+ // used to fingerprint us).
+ const CBlockIndex *last_received_header{nullptr};
+ {
+ LOCK(cs_main);
+ last_received_header = m_chainman.m_blockman.LookupBlockIndex(headers.back().GetHash());
+ if (IsAncestorOfBestHeaderOrTip(last_received_header)) {
+ already_validated_work = true;
+ }
+ }
+
// At this point, the headers connect to something in our block index.
- if (!CheckHeadersAreContinuous(headers)) {
- Misbehaving(peer, 20, "non-continuous headers sequence");
+ // Do anti-DoS checks to determine if we should process or store for later
+ // processing.
+ if (!already_validated_work && TryLowWorkHeadersSync(peer, pfrom,
+ chain_start_header, headers)) {
+ // If we successfully started a low-work headers sync, then there
+ // should be no headers to process any further.
+ Assume(headers.empty());
return;
}
+ // At this point, we have a set of headers with sufficient work on them
+ // which can be processed.
+
// If we don't have the last header, then this peer will have given us
// something new (if these headers are valid).
- bool received_new_header{WITH_LOCK(::cs_main, return m_chainman.m_blockman.LookupBlockIndex(headers.back().GetHash()) == nullptr)};
+ bool received_new_header{last_received_header == nullptr};
+ // Now process all the headers.
BlockValidationState state;
- if (!m_chainman.ProcessNewBlockHeaders(headers, state, &pindexLast)) {
+ if (!m_chainman.ProcessNewBlockHeaders(headers, /*min_pow_checked=*/true, state, &pindexLast)) {
if (state.IsInvalid()) {
MaybePunishNodeForBlock(pfrom.GetId(), state, via_compact_block, "invalid header received");
return;
}
}
+ Assume(pindexLast);
- // Consider fetching more headers.
- if (nCount == MAX_HEADERS_RESULTS) {
+ // Consider fetching more headers if we are not using our headers-sync mechanism.
+ if (nCount == MAX_HEADERS_RESULTS && !have_headers_sync) {
// Headers message had its maximum size; the peer may have more headers.
- if (MaybeSendGetHeaders(pfrom, WITH_LOCK(m_chainman.GetMutex(), return m_chainman.ActiveChain().GetLocator(pindexLast)), peer)) {
+ if (MaybeSendGetHeaders(pfrom, GetLocator(pindexLast), peer)) {
LogPrint(BCLog::NET, "more getheaders (%d) to end to peer=%d (startheight:%d)\n",
pindexLast->nHeight, pfrom.GetId(), peer.m_starting_height);
}
@@ -2771,10 +3112,10 @@ void PeerManagerImpl::ProcessGetCFCheckPt(CNode& node, Peer& peer, CDataStream&
m_connman.PushMessage(&node, std::move(msg));
}
-void PeerManagerImpl::ProcessBlock(CNode& node, const std::shared_ptr<const CBlock>& block, bool force_processing)
+void PeerManagerImpl::ProcessBlock(CNode& node, const std::shared_ptr<const CBlock>& block, bool force_processing, bool min_pow_checked)
{
bool new_block{false};
- m_chainman.ProcessNewBlock(block, force_processing, &new_block);
+ m_chainman.ProcessNewBlock(block, force_processing, min_pow_checked, &new_block);
if (new_block) {
node.m_last_block_time = GetTime<std::chrono::seconds>();
} else {
@@ -3032,13 +3373,6 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
pfrom.ConnectionTypeAsString());
}
- if (pfrom.GetCommonVersion() >= SENDHEADERS_VERSION) {
- // Tell our peer we prefer to receive headers rather than inv's
- // We send this to non-NODE NETWORK peers as well, because even
- // non-NODE NETWORK peers can announce blocks (such as pruning
- // nodes)
- m_connman.PushMessage(&pfrom, msgMaker.Make(NetMsgType::SENDHEADERS));
- }
if (pfrom.GetCommonVersion() >= SHORT_IDS_BLOCKS_VERSION) {
// Tell our peer we are willing to provide version 2 cmpctblocks.
// However, we do not request new block announcements using
@@ -3285,7 +3619,7 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
// use if we turned on sync with all peers).
CNodeState& state{*Assert(State(pfrom.GetId()))};
if (state.fSyncStarted || (!peer->m_inv_triggered_getheaders_before_sync && *best_block != m_last_block_inv_triggering_headers_sync)) {
- if (MaybeSendGetHeaders(pfrom, m_chainman.ActiveChain().GetLocator(m_chainman.m_best_header), *peer)) {
+ if (MaybeSendGetHeaders(pfrom, GetLocator(m_chainman.m_best_header), *peer)) {
LogPrint(BCLog::NET, "getheaders (%d) %s to peer=%d\n",
m_chainman.m_best_header->nHeight, best_block->ToString(),
pfrom.GetId());
@@ -3749,12 +4083,17 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
{
LOCK(cs_main);
- if (!m_chainman.m_blockman.LookupBlockIndex(cmpctblock.header.hashPrevBlock)) {
+ const CBlockIndex* prev_block = m_chainman.m_blockman.LookupBlockIndex(cmpctblock.header.hashPrevBlock);
+ if (!prev_block) {
// Doesn't connect (or is genesis), instead of DoSing in AcceptBlockHeader, request deeper headers
if (!m_chainman.ActiveChainstate().IsInitialBlockDownload()) {
- MaybeSendGetHeaders(pfrom, m_chainman.ActiveChain().GetLocator(m_chainman.m_best_header), *peer);
+ MaybeSendGetHeaders(pfrom, GetLocator(m_chainman.m_best_header), *peer);
}
return;
+ } else if (prev_block->nChainWork + CalculateHeadersWork({cmpctblock.header}) < GetAntiDoSWorkThreshold()) {
+ // If we get a low-work header in a compact block, we can ignore it.
+ LogPrint(BCLog::NET, "Ignoring low-work compact block from peer %d\n", pfrom.GetId());
+ return;
}
if (!m_chainman.m_blockman.LookupBlockIndex(cmpctblock.header.GetHash())) {
@@ -3764,7 +4103,7 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
const CBlockIndex *pindex = nullptr;
BlockValidationState state;
- if (!m_chainman.ProcessNewBlockHeaders({cmpctblock.header}, state, &pindex)) {
+ if (!m_chainman.ProcessNewBlockHeaders({cmpctblock.header}, /*min_pow_checked=*/true, state, &pindex)) {
if (state.IsInvalid()) {
MaybePunishNodeForBlock(pfrom.GetId(), state, /*via_compact_block=*/true, "invalid header via cmpctblock");
return;
@@ -3931,7 +4270,7 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
// we have a chain with at least nMinimumChainWork), and we ignore
// compact blocks with less work than our tip, it is safe to treat
// reconstructed compact blocks as having been requested.
- ProcessBlock(pfrom, pblock, /*force_processing=*/true);
+ ProcessBlock(pfrom, pblock, /*force_processing=*/true, /*min_pow_checked=*/true);
LOCK(cs_main); // hold cs_main for CBlockIndex::IsValid()
if (pindex->IsValid(BLOCK_VALID_TRANSACTIONS)) {
// Clear download state for this block, which is in
@@ -4014,7 +4353,7 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
// disk-space attacks), but this should be safe due to the
// protections in the compact block handler -- see related comment
// in compact block optimistic reconstruction handling.
- ProcessBlock(pfrom, pblock, /*force_processing=*/true);
+ ProcessBlock(pfrom, pblock, /*force_processing=*/true, /*min_pow_checked=*/true);
}
return;
}
@@ -4045,7 +4384,23 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
ReadCompactSize(vRecv); // ignore tx count; assume it is 0.
}
- return ProcessHeadersMessage(pfrom, *peer, headers, /*via_compact_block=*/false);
+ ProcessHeadersMessage(pfrom, *peer, std::move(headers), /*via_compact_block=*/false);
+
+ // Check if the headers presync progress needs to be reported to validation.
+ // This needs to be done without holding the m_headers_presync_mutex lock.
+ if (m_headers_presync_should_signal.exchange(false)) {
+ HeadersPresyncStats stats;
+ {
+ LOCK(m_headers_presync_mutex);
+ auto it = m_headers_presync_stats.find(m_headers_presync_bestpeer);
+ if (it != m_headers_presync_stats.end()) stats = it->second;
+ }
+ if (stats.second) {
+ m_chainman.ReportHeadersPresync(stats.first, stats.second->first, stats.second->second);
+ }
+ }
+
+ return;
}
if (msg_type == NetMsgType::BLOCK)
@@ -4063,6 +4418,7 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
bool forceProcessing = false;
const uint256 hash(pblock->GetHash());
+ bool min_pow_checked = false;
{
LOCK(cs_main);
// Always process the block if we requested it, since we may
@@ -4073,8 +4429,14 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
// which peers send us compact blocks, so the race between here and
// cs_main in ProcessNewBlock is fine.
mapBlockSource.emplace(hash, std::make_pair(pfrom.GetId(), true));
+
+ // Check work on this block against our anti-DoS thresholds.
+ const CBlockIndex* prev_block = m_chainman.m_blockman.LookupBlockIndex(pblock->hashPrevBlock);
+ if (prev_block && prev_block->nChainWork + CalculateHeadersWork({pblock->GetBlockHeader()}) >= GetAntiDoSWorkThreshold()) {
+ min_pow_checked = true;
+ }
}
- ProcessBlock(pfrom, pblock, forceProcessing);
+ ProcessBlock(pfrom, pblock, forceProcessing, min_pow_checked);
return;
}
@@ -4502,7 +4864,7 @@ void PeerManagerImpl::ConsiderEviction(CNode& pto, Peer& peer, std::chrono::seco
// getheaders in-flight already, in which case the peer should
// still respond to us with a sufficiently high work chain tip.
MaybeSendGetHeaders(pto,
- m_chainman.ActiveChain().GetLocator(state.m_chain_sync.m_work_header->pprev),
+ GetLocator(state.m_chain_sync.m_work_header->pprev),
peer);
LogPrint(BCLog::NET, "sending getheaders to outbound peer=%d to verify chain work (current best known block:%s, benchmark blockhash: %s)\n", pto.GetId(), state.pindexBestKnownBlock != nullptr ? state.pindexBestKnownBlock->GetBlockHash().ToString() : "<none>", state.m_chain_sync.m_work_header->GetBlockHash().ToString());
state.m_chain_sync.m_sent_getheaders = true;
@@ -4759,6 +5121,27 @@ void PeerManagerImpl::MaybeSendAddr(CNode& node, Peer& peer, std::chrono::micros
}
}
+void PeerManagerImpl::MaybeSendSendHeaders(CNode& node, Peer& peer)
+{
+ // Delay sending SENDHEADERS (BIP 130) until we're done with an
+ // initial-headers-sync with this peer. Receiving headers announcements for
+ // new blocks while trying to sync their headers chain is problematic,
+ // because it would interfere with the sync state tracking we do for that peer.
+ if (!peer.m_sent_sendheaders && node.GetCommonVersion() >= SENDHEADERS_VERSION) {
+ LOCK(cs_main);
+ CNodeState &state = *State(node.GetId());
+ if (state.pindexBestKnownBlock != nullptr &&
+ state.pindexBestKnownBlock->nChainWork > nMinimumChainWork) {
+ // Tell our peer we prefer to receive headers rather than inv's
+ // We send this to non-NODE NETWORK peers as well, because even
+ // non-NODE NETWORK peers can announce blocks (such as pruning
+ // nodes)
+ m_connman.PushMessage(&node, CNetMsgMaker(node.GetCommonVersion()).Make(NetMsgType::SENDHEADERS));
+ peer.m_sent_sendheaders = true;
+ }
+ }
+}
+
void PeerManagerImpl::MaybeSendFeefilter(CNode& pto, Peer& peer, std::chrono::microseconds current_time)
{
if (m_ignore_incoming_txs) return;
@@ -4880,6 +5263,8 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
MaybeSendAddr(*pto, *peer, current_time);
+ MaybeSendSendHeaders(*pto, *peer);
+
{
LOCK(cs_main);
@@ -4924,7 +5309,7 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
got back an empty response. */
if (pindexStart->pprev)
pindexStart = pindexStart->pprev;
- if (MaybeSendGetHeaders(*pto, m_chainman.ActiveChain().GetLocator(pindexStart), *peer)) {
+ if (MaybeSendGetHeaders(*pto, GetLocator(pindexStart), *peer)) {
LogPrint(BCLog::NET, "initial getheaders (%d) to peer=%d (startheight:%d)\n", pindexStart->nHeight, pto->GetId(), peer->m_starting_height);
state.fSyncStarted = true;
diff --git a/src/net_processing.h b/src/net_processing.h
index bcda9614d4..0a882b1e53 100644
--- a/src/net_processing.h
+++ b/src/net_processing.h
@@ -35,6 +35,7 @@ struct CNodeStateStats {
uint64_t m_addr_rate_limited = 0;
bool m_addr_relay_enabled{false};
ServiceFlags their_services;
+ int64_t presync_height{-1};
};
class PeerManager : public CValidationInterface, public NetEventsInterface
diff --git a/src/node/interface_ui.cpp b/src/node/interface_ui.cpp
index 370cde84f8..fa90d6fda7 100644
--- a/src/node/interface_ui.cpp
+++ b/src/node/interface_ui.cpp
@@ -53,7 +53,7 @@ void CClientUIInterface::NotifyNetworkActiveChanged(bool networkActive) { return
void CClientUIInterface::NotifyAlertChanged() { return g_ui_signals.NotifyAlertChanged(); }
void CClientUIInterface::ShowProgress(const std::string& title, int nProgress, bool resume_possible) { return g_ui_signals.ShowProgress(title, nProgress, resume_possible); }
void CClientUIInterface::NotifyBlockTip(SynchronizationState s, const CBlockIndex* i) { return g_ui_signals.NotifyBlockTip(s, i); }
-void CClientUIInterface::NotifyHeaderTip(SynchronizationState s, const CBlockIndex* i) { return g_ui_signals.NotifyHeaderTip(s, i); }
+void CClientUIInterface::NotifyHeaderTip(SynchronizationState s, int64_t height, int64_t timestamp, bool presync) { return g_ui_signals.NotifyHeaderTip(s, height, timestamp, presync); }
void CClientUIInterface::BannedListChanged() { return g_ui_signals.BannedListChanged(); }
bool InitError(const bilingual_str& str)
diff --git a/src/node/interface_ui.h b/src/node/interface_ui.h
index 37c0f6392b..316d75167e 100644
--- a/src/node/interface_ui.h
+++ b/src/node/interface_ui.h
@@ -105,7 +105,7 @@ public:
ADD_SIGNALS_DECL_WRAPPER(NotifyBlockTip, void, SynchronizationState, const CBlockIndex*);
/** Best header has changed */
- ADD_SIGNALS_DECL_WRAPPER(NotifyHeaderTip, void, SynchronizationState, const CBlockIndex*);
+ ADD_SIGNALS_DECL_WRAPPER(NotifyHeaderTip, void, SynchronizationState, int64_t height, int64_t timestamp, bool presync);
/** Banlist did change. */
ADD_SIGNALS_DECL_WRAPPER(BannedListChanged, void, void);
diff --git a/src/node/interfaces.cpp b/src/node/interfaces.cpp
index 2c845d0127..aa7ddec770 100644
--- a/src/node/interfaces.cpp
+++ b/src/node/interfaces.cpp
@@ -377,9 +377,8 @@ public:
std::unique_ptr<Handler> handleNotifyHeaderTip(NotifyHeaderTipFn fn) override
{
return MakeHandler(
- ::uiInterface.NotifyHeaderTip_connect([fn](SynchronizationState sync_state, const CBlockIndex* block) {
- fn(sync_state, BlockTip{block->nHeight, block->GetBlockTime(), block->GetBlockHash()},
- /* verification progress is unused when a header was received */ 0);
+ ::uiInterface.NotifyHeaderTip_connect([fn](SynchronizationState sync_state, int64_t height, int64_t timestamp, bool presync) {
+ fn(sync_state, BlockTip{(int)height, timestamp, uint256{}}, presync);
}));
}
NodeContext* context() override { return m_context; }
@@ -400,7 +399,7 @@ bool FillBlock(const CBlockIndex* index, const FoundBlock& block, UniqueLock<Rec
if (block.m_max_time) *block.m_max_time = index->GetBlockTimeMax();
if (block.m_mtp_time) *block.m_mtp_time = index->GetMedianTimePast();
if (block.m_in_active_chain) *block.m_in_active_chain = active[index->nHeight] == index;
- if (block.m_locator) { *block.m_locator = active.GetLocator(index); }
+ if (block.m_locator) { *block.m_locator = GetLocator(index); }
if (block.m_next_block) FillBlock(active[index->nHeight] == index ? active[index->nHeight + 1] : nullptr, *block.m_next_block, lock, active);
if (block.m_data) {
REVERSE_LOCK(lock);
@@ -527,8 +526,7 @@ public:
{
LOCK(::cs_main);
const CBlockIndex* index = chainman().m_blockman.LookupBlockIndex(block_hash);
- if (!index) return {};
- return chainman().ActiveChain().GetLocator(index);
+ return GetLocator(index);
}
std::optional<int> findLocatorFork(const CBlockLocator& locator) override
{
diff --git a/src/pow.cpp b/src/pow.cpp
index 1414d37564..c0449cac74 100644
--- a/src/pow.cpp
+++ b/src/pow.cpp
@@ -71,6 +71,57 @@ unsigned int CalculateNextWorkRequired(const CBlockIndex* pindexLast, int64_t nF
return bnNew.GetCompact();
}
+// Check that on difficulty adjustments, the new difficulty does not increase
+// or decrease beyond the permitted limits.
+bool PermittedDifficultyTransition(const Consensus::Params& params, int64_t height, uint32_t old_nbits, uint32_t new_nbits)
+{
+ if (params.fPowAllowMinDifficultyBlocks) return true;
+
+ if (height % params.DifficultyAdjustmentInterval() == 0) {
+ int64_t smallest_timespan = params.nPowTargetTimespan/4;
+ int64_t largest_timespan = params.nPowTargetTimespan*4;
+
+ const arith_uint256 pow_limit = UintToArith256(params.powLimit);
+ arith_uint256 observed_new_target;
+ observed_new_target.SetCompact(new_nbits);
+
+ // Calculate the largest difficulty value possible:
+ arith_uint256 largest_difficulty_target;
+ largest_difficulty_target.SetCompact(old_nbits);
+ largest_difficulty_target *= largest_timespan;
+ largest_difficulty_target /= params.nPowTargetTimespan;
+
+ if (largest_difficulty_target > pow_limit) {
+ largest_difficulty_target = pow_limit;
+ }
+
+ // Round and then compare this new calculated value to what is
+ // observed.
+ arith_uint256 maximum_new_target;
+ maximum_new_target.SetCompact(largest_difficulty_target.GetCompact());
+ if (maximum_new_target < observed_new_target) return false;
+
+ // Calculate the smallest difficulty value possible:
+ arith_uint256 smallest_difficulty_target;
+ smallest_difficulty_target.SetCompact(old_nbits);
+ smallest_difficulty_target *= smallest_timespan;
+ smallest_difficulty_target /= params.nPowTargetTimespan;
+
+ if (smallest_difficulty_target > pow_limit) {
+ smallest_difficulty_target = pow_limit;
+ }
+
+ // Round and then compare this new calculated value to what is
+ // observed.
+ arith_uint256 minimum_new_target;
+ minimum_new_target.SetCompact(smallest_difficulty_target.GetCompact());
+ if (minimum_new_target > observed_new_target) return false;
+ } else if (old_nbits != new_nbits) {
+ return false;
+ }
+ return true;
+}
+
bool CheckProofOfWork(uint256 hash, unsigned int nBits, const Consensus::Params& params)
{
bool fNegative;
diff --git a/src/pow.h b/src/pow.h
index 1d802cd01e..44b9d673ef 100644
--- a/src/pow.h
+++ b/src/pow.h
@@ -20,4 +20,18 @@ unsigned int CalculateNextWorkRequired(const CBlockIndex* pindexLast, int64_t nF
/** Check whether a block hash satisfies the proof-of-work requirement specified by nBits */
bool CheckProofOfWork(uint256 hash, unsigned int nBits, const Consensus::Params&);
+/**
+ * Return false if the proof-of-work requirement specified by new_nbits at a
+ * given height is not possible, given the proof-of-work on the prior block as
+ * specified by old_nbits.
+ *
+ * This function only checks that the new value is within a factor of 4 of the
+ * old value for blocks at the difficulty adjustment interval, and otherwise
+ * requires the values to be the same.
+ *
+ * Always returns true on networks where min difficulty blocks are allowed,
+ * such as regtest/testnet.
+ */
+bool PermittedDifficultyTransition(const Consensus::Params& params, int64_t height, uint32_t old_nbits, uint32_t new_nbits);
+
#endif // BITCOIN_POW_H
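The factor-of-4 bound described in the comment above can be exercised directly. The following standalone sketch is illustrative only and not part of this patch; the helper name and the specific nBits values are hypothetical, and mainnet consensus parameters (no min-difficulty blocks) are assumed. It shows a transition that would be accepted at a retarget height but rejected between retargets:

    // Illustrative sketch, not part of this change.
    #include <consensus/params.h>
    #include <pow.h>
    #include <cstdint>

    bool ExampleTransition(const Consensus::Params& mainnet_params)
    {
        const int64_t retarget_height{mainnet_params.DifficultyAdjustmentInterval()}; // 2016 on mainnet
        const uint32_t old_nbits{0x1b0404cb}; // a compact target well below powLimit
        const uint32_t new_nbits{0x1b10132c}; // the same target made exactly 4x easier
        // Accepted: at a retarget height the target may loosen by at most a factor of 4.
        const bool ok_at_retarget{PermittedDifficultyTransition(mainnet_params, retarget_height, old_nbits, new_nbits)};
        // Rejected: between retargets, nBits must not change at all.
        const bool ok_between{PermittedDifficultyTransition(mainnet_params, retarget_height + 1, old_nbits, new_nbits)};
        return ok_at_retarget && !ok_between;
    }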
diff --git a/src/qt/bitcoin.cpp b/src/qt/bitcoin.cpp
index 33c60deafb..cc01e4d54a 100644
--- a/src/qt/bitcoin.cpp
+++ b/src/qt/bitcoin.cpp
@@ -75,6 +75,7 @@ Q_IMPORT_PLUGIN(QAndroidPlatformIntegrationPlugin)
Q_DECLARE_METATYPE(bool*)
Q_DECLARE_METATYPE(CAmount)
Q_DECLARE_METATYPE(SynchronizationState)
+Q_DECLARE_METATYPE(SyncType)
Q_DECLARE_METATYPE(uint256)
static void RegisterMetaTypes()
@@ -82,6 +83,7 @@ static void RegisterMetaTypes()
// Register meta types used for QMetaObject::invokeMethod and Qt::QueuedConnection
qRegisterMetaType<bool*>();
qRegisterMetaType<SynchronizationState>();
+ qRegisterMetaType<SyncType>();
#ifdef ENABLE_WALLET
qRegisterMetaType<WalletModel*>();
#endif
diff --git a/src/qt/bitcoingui.cpp b/src/qt/bitcoingui.cpp
index 90f228803c..1c1328e000 100644
--- a/src/qt/bitcoingui.cpp
+++ b/src/qt/bitcoingui.cpp
@@ -615,8 +615,8 @@ void BitcoinGUI::setClientModel(ClientModel *_clientModel, interfaces::BlockAndH
connect(_clientModel, &ClientModel::numConnectionsChanged, this, &BitcoinGUI::setNumConnections);
connect(_clientModel, &ClientModel::networkActiveChanged, this, &BitcoinGUI::setNetworkActive);
- modalOverlay->setKnownBestHeight(tip_info->header_height, QDateTime::fromSecsSinceEpoch(tip_info->header_time));
- setNumBlocks(tip_info->block_height, QDateTime::fromSecsSinceEpoch(tip_info->block_time), tip_info->verification_progress, false, SynchronizationState::INIT_DOWNLOAD);
+ modalOverlay->setKnownBestHeight(tip_info->header_height, QDateTime::fromSecsSinceEpoch(tip_info->header_time), /*presync=*/false);
+ setNumBlocks(tip_info->block_height, QDateTime::fromSecsSinceEpoch(tip_info->block_time), tip_info->verification_progress, SyncType::BLOCK_SYNC, SynchronizationState::INIT_DOWNLOAD);
connect(_clientModel, &ClientModel::numBlocksChanged, this, &BitcoinGUI::setNumBlocks);
// Receive and report messages from client model
@@ -1026,6 +1026,13 @@ void BitcoinGUI::updateHeadersSyncProgressLabel()
progressBarLabel->setText(tr("Syncing Headers (%1%)…").arg(QString::number(100.0 / (headersTipHeight+estHeadersLeft)*headersTipHeight, 'f', 1)));
}
+void BitcoinGUI::updateHeadersPresyncProgressLabel(int64_t height, const QDateTime& blockDate)
+{
+ int estHeadersLeft = blockDate.secsTo(QDateTime::currentDateTime()) / Params().GetConsensus().nPowTargetSpacing;
+ if (estHeadersLeft > HEADER_HEIGHT_DELTA_SYNC)
+ progressBarLabel->setText(tr("Pre-syncing Headers (%1%)…").arg(QString::number(100.0 / (height+estHeadersLeft)*height, 'f', 1)));
+}
+
void BitcoinGUI::openOptionsDialogWithTab(OptionsDialog::Tab tab)
{
if (!clientModel || !clientModel->getOptionsModel())
@@ -1039,7 +1046,7 @@ void BitcoinGUI::openOptionsDialogWithTab(OptionsDialog::Tab tab)
GUIUtil::ShowModalDialogAsynchronously(dlg);
}
-void BitcoinGUI::setNumBlocks(int count, const QDateTime& blockDate, double nVerificationProgress, bool header, SynchronizationState sync_state)
+void BitcoinGUI::setNumBlocks(int count, const QDateTime& blockDate, double nVerificationProgress, SyncType synctype, SynchronizationState sync_state)
{
// Disabling macOS App Nap on initial sync, disk and reindex operations.
#ifdef Q_OS_MACOS
@@ -1052,8 +1059,8 @@ void BitcoinGUI::setNumBlocks(int count, const QDateTime& blockDate, double nVer
if (modalOverlay)
{
- if (header)
- modalOverlay->setKnownBestHeight(count, blockDate);
+ if (synctype != SyncType::BLOCK_SYNC)
+ modalOverlay->setKnownBestHeight(count, blockDate, synctype == SyncType::HEADER_PRESYNC);
else
modalOverlay->tipUpdate(count, blockDate, nVerificationProgress);
}
@@ -1067,7 +1074,10 @@ void BitcoinGUI::setNumBlocks(int count, const QDateTime& blockDate, double nVer
enum BlockSource blockSource = clientModel->getBlockSource();
switch (blockSource) {
case BlockSource::NETWORK:
- if (header) {
+ if (synctype == SyncType::HEADER_PRESYNC) {
+ updateHeadersPresyncProgressLabel(count, blockDate);
+ return;
+ } else if (synctype == SyncType::HEADER_SYNC) {
updateHeadersSyncProgressLabel();
return;
}
@@ -1075,7 +1085,7 @@ void BitcoinGUI::setNumBlocks(int count, const QDateTime& blockDate, double nVer
updateHeadersSyncProgressLabel();
break;
case BlockSource::DISK:
- if (header) {
+ if (synctype != SyncType::BLOCK_SYNC) {
progressBarLabel->setText(tr("Indexing blocks on disk…"));
} else {
progressBarLabel->setText(tr("Processing blocks on disk…"));
@@ -1085,7 +1095,7 @@ void BitcoinGUI::setNumBlocks(int count, const QDateTime& blockDate, double nVer
progressBarLabel->setText(tr("Reindexing blocks on disk…"));
break;
case BlockSource::NONE:
- if (header) {
+ if (synctype != SyncType::BLOCK_SYNC) {
return;
}
progressBarLabel->setText(tr("Connecting to peers…"));
diff --git a/src/qt/bitcoingui.h b/src/qt/bitcoingui.h
index b2e13245e1..912e9b95aa 100644
--- a/src/qt/bitcoingui.h
+++ b/src/qt/bitcoingui.h
@@ -10,6 +10,7 @@
#endif
#include <qt/bitcoinunits.h>
+#include <qt/clientmodel.h>
#include <qt/guiutil.h>
#include <qt/optionsdialog.h>
@@ -28,7 +29,6 @@
#include <memory>
-class ClientModel;
class NetworkStyle;
class Notificator;
class OptionsModel;
@@ -208,6 +208,7 @@ private:
void updateNetworkState();
void updateHeadersSyncProgressLabel();
+ void updateHeadersPresyncProgressLabel(int64_t height, const QDateTime& blockDate);
/** Open the OptionsDialog on the specified tab index */
void openOptionsDialogWithTab(OptionsDialog::Tab tab);
@@ -226,7 +227,7 @@ public Q_SLOTS:
/** Set network state shown in the UI */
void setNetworkActive(bool network_active);
/** Set number of blocks and last block date shown in the UI */
- void setNumBlocks(int count, const QDateTime& blockDate, double nVerificationProgress, bool headers, SynchronizationState sync_state);
+ void setNumBlocks(int count, const QDateTime& blockDate, double nVerificationProgress, SyncType synctype, SynchronizationState sync_state);
/** Notify the user of an event from the core network or transaction handling code.
@param[in] title the message box / notification title
diff --git a/src/qt/clientmodel.cpp b/src/qt/clientmodel.cpp
index f41da519df..092ffe7e5b 100644
--- a/src/qt/clientmodel.cpp
+++ b/src/qt/clientmodel.cpp
@@ -215,26 +215,26 @@ QString ClientModel::blocksDir() const
return GUIUtil::PathToQString(gArgs.GetBlocksDirPath());
}
-void ClientModel::TipChanged(SynchronizationState sync_state, interfaces::BlockTip tip, double verification_progress, bool header)
+void ClientModel::TipChanged(SynchronizationState sync_state, interfaces::BlockTip tip, double verification_progress, SyncType synctype)
{
- if (header) {
+ if (synctype == SyncType::HEADER_SYNC) {
// cache best headers time and height to reduce future cs_main locks
cachedBestHeaderHeight = tip.block_height;
cachedBestHeaderTime = tip.block_time;
- } else {
+ } else if (synctype == SyncType::BLOCK_SYNC) {
m_cached_num_blocks = tip.block_height;
WITH_LOCK(m_cached_tip_mutex, m_cached_tip_blocks = tip.block_hash;);
}
// Throttle GUI notifications about (a) blocks during initial sync, and (b) both blocks and headers during reindex.
- const bool throttle = (sync_state != SynchronizationState::POST_INIT && !header) || sync_state == SynchronizationState::INIT_REINDEX;
+ const bool throttle = (sync_state != SynchronizationState::POST_INIT && synctype == SyncType::BLOCK_SYNC) || sync_state == SynchronizationState::INIT_REINDEX;
const int64_t now = throttle ? GetTimeMillis() : 0;
- int64_t& nLastUpdateNotification = header ? nLastHeaderTipUpdateNotification : nLastBlockTipUpdateNotification;
+ int64_t& nLastUpdateNotification = synctype != SyncType::BLOCK_SYNC ? nLastHeaderTipUpdateNotification : nLastBlockTipUpdateNotification;
if (throttle && now < nLastUpdateNotification + count_milliseconds(MODEL_UPDATE_DELAY)) {
return;
}
- Q_EMIT numBlocksChanged(tip.block_height, QDateTime::fromSecsSinceEpoch(tip.block_time), verification_progress, header, sync_state);
+ Q_EMIT numBlocksChanged(tip.block_height, QDateTime::fromSecsSinceEpoch(tip.block_time), verification_progress, synctype, sync_state);
nLastUpdateNotification = now;
}
@@ -264,11 +264,11 @@ void ClientModel::subscribeToCoreSignals()
});
m_handler_notify_block_tip = m_node.handleNotifyBlockTip(
[this](SynchronizationState sync_state, interfaces::BlockTip tip, double verification_progress) {
- TipChanged(sync_state, tip, verification_progress, /*header=*/false);
+ TipChanged(sync_state, tip, verification_progress, SyncType::BLOCK_SYNC);
});
m_handler_notify_header_tip = m_node.handleNotifyHeaderTip(
- [this](SynchronizationState sync_state, interfaces::BlockTip tip, double verification_progress) {
- TipChanged(sync_state, tip, verification_progress, /*header=*/true);
+ [this](SynchronizationState sync_state, interfaces::BlockTip tip, bool presync) {
+ TipChanged(sync_state, tip, /*verification_progress=*/0.0, presync ? SyncType::HEADER_PRESYNC : SyncType::HEADER_SYNC);
});
}
diff --git a/src/qt/clientmodel.h b/src/qt/clientmodel.h
index 81f03a58ec..4a6abd6a76 100644
--- a/src/qt/clientmodel.h
+++ b/src/qt/clientmodel.h
@@ -37,6 +37,12 @@ enum class BlockSource {
NETWORK
};
+enum class SyncType {
+ HEADER_PRESYNC,
+ HEADER_SYNC,
+ BLOCK_SYNC
+};
+
enum NumConnections {
CONNECTIONS_NONE = 0,
CONNECTIONS_IN = (1U << 0),
@@ -105,13 +111,13 @@ private:
//! A thread to interact with m_node asynchronously
QThread* const m_thread;
- void TipChanged(SynchronizationState sync_state, interfaces::BlockTip tip, double verification_progress, bool header) EXCLUSIVE_LOCKS_REQUIRED(!m_cached_tip_mutex);
+ void TipChanged(SynchronizationState sync_state, interfaces::BlockTip tip, double verification_progress, SyncType synctype) EXCLUSIVE_LOCKS_REQUIRED(!m_cached_tip_mutex);
void subscribeToCoreSignals();
void unsubscribeFromCoreSignals();
Q_SIGNALS:
void numConnectionsChanged(int count);
- void numBlocksChanged(int count, const QDateTime& blockDate, double nVerificationProgress, bool header, SynchronizationState sync_state);
+ void numBlocksChanged(int count, const QDateTime& blockDate, double nVerificationProgress, SyncType header, SynchronizationState sync_state);
void mempoolSizeChanged(long count, size_t mempoolSizeInBytes);
void networkActiveChanged(bool networkActive);
void alertsChanged(const QString &warnings);
diff --git a/src/qt/modaloverlay.cpp b/src/qt/modaloverlay.cpp
index 97ee75a31f..dfa33764f6 100644
--- a/src/qt/modaloverlay.cpp
+++ b/src/qt/modaloverlay.cpp
@@ -78,13 +78,16 @@ bool ModalOverlay::event(QEvent* ev) {
return QWidget::event(ev);
}
-void ModalOverlay::setKnownBestHeight(int count, const QDateTime& blockDate)
+void ModalOverlay::setKnownBestHeight(int count, const QDateTime& blockDate, bool presync)
{
- if (count > bestHeaderHeight) {
+ if (!presync && count > bestHeaderHeight) {
bestHeaderHeight = count;
bestHeaderDate = blockDate;
UpdateHeaderSyncLabel();
}
+ if (presync) {
+ UpdateHeaderPresyncLabel(count, blockDate);
+ }
}
void ModalOverlay::tipUpdate(int count, const QDateTime& blockDate, double nVerificationProgress)
@@ -158,6 +161,11 @@ void ModalOverlay::UpdateHeaderSyncLabel() {
ui->numberOfBlocksLeft->setText(tr("Unknown. Syncing Headers (%1, %2%)…").arg(bestHeaderHeight).arg(QString::number(100.0 / (bestHeaderHeight + est_headers_left) * bestHeaderHeight, 'f', 1)));
}
+void ModalOverlay::UpdateHeaderPresyncLabel(int height, const QDateTime& blockDate) {
+ int est_headers_left = blockDate.secsTo(QDateTime::currentDateTime()) / Params().GetConsensus().nPowTargetSpacing;
+ ui->numberOfBlocksLeft->setText(tr("Unknown. Pre-syncing Headers (%1, %2%)…").arg(height).arg(QString::number(100.0 / (height + est_headers_left) * height, 'f', 1)));
+}
+
void ModalOverlay::toggleVisibility()
{
showHide(layerIsVisible, true);
diff --git a/src/qt/modaloverlay.h b/src/qt/modaloverlay.h
index 1d8af5cbf6..682c94cd01 100644
--- a/src/qt/modaloverlay.h
+++ b/src/qt/modaloverlay.h
@@ -26,7 +26,7 @@ public:
~ModalOverlay();
void tipUpdate(int count, const QDateTime& blockDate, double nVerificationProgress);
- void setKnownBestHeight(int count, const QDateTime& blockDate);
+ void setKnownBestHeight(int count, const QDateTime& blockDate, bool presync);
// will show or hide the modal layer
void showHide(bool hide = false, bool userRequested = false);
@@ -52,6 +52,7 @@ private:
bool userClosed;
QPropertyAnimation m_animation;
void UpdateHeaderSyncLabel();
+ void UpdateHeaderPresyncLabel(int height, const QDateTime& blockDate);
};
#endif // BITCOIN_QT_MODALOVERLAY_H
diff --git a/src/qt/rpcconsole.cpp b/src/qt/rpcconsole.cpp
index 70fccdef1c..8c0b8cc3ab 100644
--- a/src/qt/rpcconsole.cpp
+++ b/src/qt/rpcconsole.cpp
@@ -661,7 +661,7 @@ void RPCConsole::setClientModel(ClientModel *model, int bestblock_height, int64_
setNumConnections(model->getNumConnections());
connect(model, &ClientModel::numConnectionsChanged, this, &RPCConsole::setNumConnections);
- setNumBlocks(bestblock_height, QDateTime::fromSecsSinceEpoch(bestblock_date), verification_progress, false);
+ setNumBlocks(bestblock_height, QDateTime::fromSecsSinceEpoch(bestblock_date), verification_progress, SyncType::BLOCK_SYNC);
connect(model, &ClientModel::numBlocksChanged, this, &RPCConsole::setNumBlocks);
updateNetworkState();
@@ -973,9 +973,9 @@ void RPCConsole::setNetworkActive(bool networkActive)
updateNetworkState();
}
-void RPCConsole::setNumBlocks(int count, const QDateTime& blockDate, double nVerificationProgress, bool headers)
+void RPCConsole::setNumBlocks(int count, const QDateTime& blockDate, double nVerificationProgress, SyncType synctype)
{
- if (!headers) {
+ if (synctype == SyncType::BLOCK_SYNC) {
ui->numberOfBlocks->setText(QString::number(count));
ui->lastBlockTime->setText(blockDate.toString());
}
diff --git a/src/qt/rpcconsole.h b/src/qt/rpcconsole.h
index 1a54fe0cad..a3c713e966 100644
--- a/src/qt/rpcconsole.h
+++ b/src/qt/rpcconsole.h
@@ -9,6 +9,7 @@
#include <config/bitcoin-config.h>
#endif
+#include <qt/clientmodel.h>
#include <qt/guiutil.h>
#include <qt/peertablemodel.h>
@@ -19,7 +20,6 @@
#include <QThread>
#include <QWidget>
-class ClientModel;
class PlatformStyle;
class RPCExecutor;
class RPCTimerInterface;
@@ -121,7 +121,7 @@ public Q_SLOTS:
/** Set network state shown in the UI */
void setNetworkActive(bool networkActive);
/** Set number of blocks and last block date shown in the UI */
- void setNumBlocks(int count, const QDateTime& blockDate, double nVerificationProgress, bool headers);
+ void setNumBlocks(int count, const QDateTime& blockDate, double nVerificationProgress, SyncType synctype);
/** Set size (number of transactions and memory usage) of the mempool in the UI */
void setMempoolSize(long numberOfTxs, size_t dynUsage);
/** Go forward or back in history */
diff --git a/src/qt/sendcoinsdialog.cpp b/src/qt/sendcoinsdialog.cpp
index a75c1098a5..53c352b393 100644
--- a/src/qt/sendcoinsdialog.cpp
+++ b/src/qt/sendcoinsdialog.cpp
@@ -839,7 +839,7 @@ void SendCoinsDialog::updateCoinControlState()
m_coin_control->fAllowWatchOnly = model->wallet().privateKeysDisabled() && !model->wallet().hasExternalSigner();
}
-void SendCoinsDialog::updateNumberOfBlocks(int count, const QDateTime& blockDate, double nVerificationProgress, bool headers, SynchronizationState sync_state) {
+void SendCoinsDialog::updateNumberOfBlocks(int count, const QDateTime& blockDate, double nVerificationProgress, SyncType synctype, SynchronizationState sync_state) {
if (sync_state == SynchronizationState::POST_INIT) {
updateSmartFeeLabel();
}
diff --git a/src/qt/sendcoinsdialog.h b/src/qt/sendcoinsdialog.h
index b58d4690a0..dcdf189532 100644
--- a/src/qt/sendcoinsdialog.h
+++ b/src/qt/sendcoinsdialog.h
@@ -5,6 +5,7 @@
#ifndef BITCOIN_QT_SENDCOINSDIALOG_H
#define BITCOIN_QT_SENDCOINSDIALOG_H
+#include <qt/clientmodel.h>
#include <qt/walletmodel.h>
#include <QDialog>
@@ -12,7 +13,6 @@
#include <QString>
#include <QTimer>
-class ClientModel;
class PlatformStyle;
class SendCoinsEntry;
class SendCoinsRecipient;
@@ -111,7 +111,7 @@ private Q_SLOTS:
void coinControlClipboardLowOutput();
void coinControlClipboardChange();
void updateFeeSectionControls();
- void updateNumberOfBlocks(int count, const QDateTime& blockDate, double nVerificationProgress, bool headers, SynchronizationState sync_state);
+ void updateNumberOfBlocks(int count, const QDateTime& blockDate, double nVerificationProgress, SyncType synctype, SynchronizationState sync_state);
void updateSmartFeeLabel();
Q_SIGNALS:
diff --git a/src/rpc/mining.cpp b/src/rpc/mining.cpp
index 91feb2c24b..1ad704a490 100644
--- a/src/rpc/mining.cpp
+++ b/src/rpc/mining.cpp
@@ -132,7 +132,7 @@ static bool GenerateBlock(ChainstateManager& chainman, CBlock& block, uint64_t&
}
std::shared_ptr<const CBlock> shared_pblock = std::make_shared<const CBlock>(block);
- if (!chainman.ProcessNewBlock(shared_pblock, true, nullptr)) {
+ if (!chainman.ProcessNewBlock(shared_pblock, /*force_processing=*/true, /*min_pow_checked=*/true, nullptr)) {
throw JSONRPCError(RPC_INTERNAL_ERROR, "ProcessNewBlock, block not accepted");
}
@@ -981,7 +981,7 @@ static RPCHelpMan submitblock()
bool new_block;
auto sc = std::make_shared<submitblock_StateCatcher>(block.GetHash());
RegisterSharedValidationInterface(sc);
- bool accepted = chainman.ProcessNewBlock(blockptr, /*force_processing=*/true, /*new_block=*/&new_block);
+ bool accepted = chainman.ProcessNewBlock(blockptr, /*force_processing=*/true, /*min_pow_checked=*/true, /*new_block=*/&new_block);
UnregisterSharedValidationInterface(sc);
if (!new_block && accepted) {
return "duplicate";
@@ -1023,7 +1023,7 @@ static RPCHelpMan submitheader()
}
BlockValidationState state;
- chainman.ProcessNewBlockHeaders({h}, state);
+ chainman.ProcessNewBlockHeaders({h}, /*min_pow_checked=*/true, state);
if (state.IsValid()) return UniValue::VNULL;
if (state.IsError()) {
throw JSONRPCError(RPC_VERIFY_ERROR, state.ToString());
diff --git a/src/rpc/net.cpp b/src/rpc/net.cpp
index 06f46040b8..88584ff25f 100644
--- a/src/rpc/net.cpp
+++ b/src/rpc/net.cpp
@@ -132,6 +132,7 @@ static RPCHelpMan getpeerinfo()
{RPCResult::Type::BOOL, "bip152_hb_to", "Whether we selected peer as (compact blocks) high-bandwidth peer"},
{RPCResult::Type::BOOL, "bip152_hb_from", "Whether peer selected us as (compact blocks) high-bandwidth peer"},
{RPCResult::Type::NUM, "startingheight", /*optional=*/true, "The starting height (block) of the peer"},
+ {RPCResult::Type::NUM, "presynced_headers", /*optional=*/true, "The current height of header pre-synchronization with this peer, or -1 if no low-work sync is in progress"},
{RPCResult::Type::NUM, "synced_headers", /*optional=*/true, "The last header we have in common with this peer"},
{RPCResult::Type::NUM, "synced_blocks", /*optional=*/true, "The last block we have in common with this peer"},
{RPCResult::Type::ARR, "inflight", /*optional=*/true, "",
@@ -226,6 +227,7 @@ static RPCHelpMan getpeerinfo()
obj.pushKV("bip152_hb_from", stats.m_bip152_highbandwidth_from);
if (fStateStats) {
obj.pushKV("startingheight", statestats.m_starting_height);
+ obj.pushKV("presynced_headers", statestats.presync_height);
obj.pushKV("synced_headers", statestats.nSyncHeight);
obj.pushKV("synced_blocks", statestats.nCommonHeight);
UniValue heights(UniValue::VARR);
diff --git a/src/test/blockfilter_index_tests.cpp b/src/test/blockfilter_index_tests.cpp
index 1a182209b8..2798e998af 100644
--- a/src/test/blockfilter_index_tests.cpp
+++ b/src/test/blockfilter_index_tests.cpp
@@ -101,7 +101,7 @@ bool BuildChainTestingSetup::BuildChain(const CBlockIndex* pindex,
CBlockHeader header = block->GetBlockHeader();
BlockValidationState state;
- if (!Assert(m_node.chainman)->ProcessNewBlockHeaders({header}, state, &pindex)) {
+ if (!Assert(m_node.chainman)->ProcessNewBlockHeaders({header}, true, state, &pindex)) {
return false;
}
}
@@ -178,7 +178,7 @@ BOOST_FIXTURE_TEST_CASE(blockfilter_index_initial_sync, BuildChainTestingSetup)
uint256 chainA_last_header = last_header;
for (size_t i = 0; i < 2; i++) {
const auto& block = chainA[i];
- BOOST_REQUIRE(Assert(m_node.chainman)->ProcessNewBlock(block, true, nullptr));
+ BOOST_REQUIRE(Assert(m_node.chainman)->ProcessNewBlock(block, true, true, nullptr));
}
for (size_t i = 0; i < 2; i++) {
const auto& block = chainA[i];
@@ -196,7 +196,7 @@ BOOST_FIXTURE_TEST_CASE(blockfilter_index_initial_sync, BuildChainTestingSetup)
uint256 chainB_last_header = last_header;
for (size_t i = 0; i < 3; i++) {
const auto& block = chainB[i];
- BOOST_REQUIRE(Assert(m_node.chainman)->ProcessNewBlock(block, true, nullptr));
+ BOOST_REQUIRE(Assert(m_node.chainman)->ProcessNewBlock(block, true, true, nullptr));
}
for (size_t i = 0; i < 3; i++) {
const auto& block = chainB[i];
@@ -227,7 +227,7 @@ BOOST_FIXTURE_TEST_CASE(blockfilter_index_initial_sync, BuildChainTestingSetup)
// Reorg back to chain A.
for (size_t i = 2; i < 4; i++) {
const auto& block = chainA[i];
- BOOST_REQUIRE(Assert(m_node.chainman)->ProcessNewBlock(block, true, nullptr));
+ BOOST_REQUIRE(Assert(m_node.chainman)->ProcessNewBlock(block, true, true, nullptr));
}
// Check that chain A and B blocks can be retrieved.
diff --git a/src/test/coinstatsindex_tests.cpp b/src/test/coinstatsindex_tests.cpp
index c93d05a93b..132c4e53e7 100644
--- a/src/test/coinstatsindex_tests.cpp
+++ b/src/test/coinstatsindex_tests.cpp
@@ -102,7 +102,7 @@ BOOST_FIXTURE_TEST_CASE(coinstatsindex_unclean_shutdown, TestChain100Setup)
LOCK(cs_main);
BlockValidationState state;
BOOST_CHECK(CheckBlock(block, state, params.GetConsensus()));
- BOOST_CHECK(chainstate.AcceptBlock(new_block, state, &new_block_index, true, nullptr, nullptr));
+ BOOST_CHECK(chainstate.AcceptBlock(new_block, state, &new_block_index, true, nullptr, nullptr, true));
CCoinsViewCache view(&chainstate.CoinsTip());
BOOST_CHECK(chainstate.ConnectBlock(block, state, new_block_index, view));
}
diff --git a/src/test/fuzz/bitdeque.cpp b/src/test/fuzz/bitdeque.cpp
new file mode 100644
index 0000000000..01af8320b5
--- /dev/null
+++ b/src/test/fuzz/bitdeque.cpp
@@ -0,0 +1,542 @@
+// Copyright (c) 2022 The Bitcoin Core developers
+// Distributed under the MIT software license, see the accompanying
+// file COPYING or http://www.opensource.org/licenses/mit-license.php.
+
+#include <util/bitdeque.h>
+
+#include <random.h>
+#include <test/fuzz/FuzzedDataProvider.h>
+#include <test/fuzz/util.h>
+
+#include <deque>
+#include <vector>
+
+namespace {
+
+constexpr int LEN_BITS = 16;
+constexpr int RANDDATA_BITS = 20;
+
+using bitdeque_type = bitdeque<128>;
+
+//! Deterministic random vector of bools, for begin/end insertions to draw from.
+std::vector<bool> RANDDATA;
+
+void InitRandData()
+{
+ FastRandomContext ctx(true);
+ RANDDATA.clear();
+ for (size_t i = 0; i < (1U << RANDDATA_BITS) + (1U << LEN_BITS); ++i) {
+ RANDDATA.push_back(ctx.randbool());
+ }
+}
+
+} // namespace
+
+FUZZ_TARGET_INIT(bitdeque, InitRandData)
+{
+ FuzzedDataProvider provider(buffer.data(), buffer.size());
+ FastRandomContext ctx(true);
+
+ size_t maxlen = (1U << provider.ConsumeIntegralInRange<size_t>(0, LEN_BITS)) - 1;
+ size_t limitlen = 4 * maxlen;
+
+ std::deque<bool> deq;
+ bitdeque_type bitdeq;
+
+ const auto& cdeq = deq;
+ const auto& cbitdeq = bitdeq;
+
+ size_t initlen = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
+ while (initlen) {
+ bool val = ctx.randbool();
+ deq.push_back(val);
+ bitdeq.push_back(val);
+ --initlen;
+ }
+
+ while (provider.remaining_bytes()) {
+ {
+ assert(deq.size() == bitdeq.size());
+ auto it = deq.begin();
+ auto bitit = bitdeq.begin();
+ auto itend = deq.end();
+ while (it != itend) {
+ assert(*it == *bitit);
+ ++it;
+ ++bitit;
+ }
+ }
+
+ CallOneOf(provider,
+ [&] {
+ // constructor()
+ deq = std::deque<bool>{};
+ bitdeq = bitdeque_type{};
+ },
+ [&] {
+ // clear()
+ deq.clear();
+ bitdeq.clear();
+ },
+ [&] {
+ // resize()
+ auto count = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
+ deq.resize(count);
+ bitdeq.resize(count);
+ },
+ [&] {
+ // assign(count, val)
+ auto count = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
+ bool val = ctx.randbool();
+ deq.assign(count, val);
+ bitdeq.assign(count, val);
+ },
+ [&] {
+ // constructor(count, val)
+ auto count = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
+ bool val = ctx.randbool();
+ deq = std::deque<bool>(count, val);
+ bitdeq = bitdeque_type(count, val);
+ },
+ [&] {
+ // constructor(count)
+ auto count = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
+ deq = std::deque<bool>(count);
+ bitdeq = bitdeque_type(count);
+ },
+ [&] {
+ // construct(begin, end)
+ auto count = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
+ auto rand_begin = RANDDATA.begin() + ctx.randbits(RANDDATA_BITS);
+ auto rand_end = rand_begin + count;
+ deq = std::deque<bool>(rand_begin, rand_end);
+ bitdeq = bitdeque_type(rand_begin, rand_end);
+ },
+ [&] {
+ // assign(begin, end)
+ auto count = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
+ auto rand_begin = RANDDATA.begin() + ctx.randbits(RANDDATA_BITS);
+ auto rand_end = rand_begin + count;
+ deq.assign(rand_begin, rand_end);
+ bitdeq.assign(rand_begin, rand_end);
+ },
+ [&] {
+ // construct(initializer_list)
+ std::initializer_list<bool> ilist{ctx.randbool(), ctx.randbool(), ctx.randbool(), ctx.randbool(), ctx.randbool()};
+ deq = std::deque<bool>(ilist);
+ bitdeq = bitdeque_type(ilist);
+ },
+ [&] {
+ // assign(initializer_list)
+ std::initializer_list<bool> ilist{ctx.randbool(), ctx.randbool(), ctx.randbool()};
+ deq.assign(ilist);
+ bitdeq.assign(ilist);
+ },
+ [&] {
+ // operator=(const&)
+ auto count = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
+ bool val = ctx.randbool();
+ const std::deque<bool> deq2(count, val);
+ deq = deq2;
+ const bitdeque_type bitdeq2(count, val);
+ bitdeq = bitdeq2;
+ },
+ [&] {
+ // operator=(&&)
+ auto count = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
+ bool val = ctx.randbool();
+ std::deque<bool> deq2(count, val);
+ deq = std::move(deq2);
+ bitdeque_type bitdeq2(count, val);
+ bitdeq = std::move(bitdeq2);
+ },
+ [&] {
+ // deque swap
+ auto count = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
+ auto rand_begin = RANDDATA.begin() + ctx.randbits(RANDDATA_BITS);
+ auto rand_end = rand_begin + count;
+ std::deque<bool> deq2(rand_begin, rand_end);
+ bitdeque_type bitdeq2(rand_begin, rand_end);
+ using std::swap;
+ assert(deq.size() == bitdeq.size());
+ assert(deq2.size() == bitdeq2.size());
+ swap(deq, deq2);
+ swap(bitdeq, bitdeq2);
+ assert(deq.size() == bitdeq.size());
+ assert(deq2.size() == bitdeq2.size());
+ },
+ [&] {
+ // deque.swap
+ auto count = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
+ auto rand_begin = RANDDATA.begin() + ctx.randbits(RANDDATA_BITS);
+ auto rand_end = rand_begin + count;
+ std::deque<bool> deq2(rand_begin, rand_end);
+ bitdeque_type bitdeq2(rand_begin, rand_end);
+ assert(deq.size() == bitdeq.size());
+ assert(deq2.size() == bitdeq2.size());
+ deq.swap(deq2);
+ bitdeq.swap(bitdeq2);
+ assert(deq.size() == bitdeq.size());
+ assert(deq2.size() == bitdeq2.size());
+ },
+ [&] {
+ // operator=(initializer_list)
+ std::initializer_list<bool> ilist{ctx.randbool(), ctx.randbool(), ctx.randbool()};
+ deq = ilist;
+ bitdeq = ilist;
+ },
+ [&] {
+ // iterator arithmetic
+ auto pos1 = provider.ConsumeIntegralInRange<long>(0, cdeq.size());
+ auto pos2 = provider.ConsumeIntegralInRange<long>(0, cdeq.size());
+ auto it = deq.begin() + pos1;
+ auto bitit = bitdeq.begin() + pos1;
+ if ((size_t)pos1 != cdeq.size()) assert(*it == *bitit);
+ assert(it - deq.begin() == pos1);
+ assert(bitit - bitdeq.begin() == pos1);
+ if (provider.ConsumeBool()) {
+ it += pos2 - pos1;
+ bitit += pos2 - pos1;
+ } else {
+ it -= pos1 - pos2;
+ bitit -= pos1 - pos2;
+ }
+ if ((size_t)pos2 != cdeq.size()) assert(*it == *bitit);
+ assert(deq.end() - it == bitdeq.end() - bitit);
+ if (provider.ConsumeBool()) {
+ if ((size_t)pos2 != cdeq.size()) {
+ ++it;
+ ++bitit;
+ }
+ } else {
+ if (pos2 != 0) {
+ --it;
+ --bitit;
+ }
+ }
+ assert(deq.end() - it == bitdeq.end() - bitit);
+ },
+ [&] {
+ // begin() and end()
+ assert(deq.end() - deq.begin() == bitdeq.end() - bitdeq.begin());
+ },
+ [&] {
+ // begin() and end() (const)
+ assert(cdeq.end() - cdeq.begin() == cbitdeq.end() - cbitdeq.begin());
+ },
+ [&] {
+ // rbegin() and rend()
+ assert(deq.rend() - deq.rbegin() == bitdeq.rend() - bitdeq.rbegin());
+ },
+ [&] {
+ // rbegin() and rend() (const)
+ assert(cdeq.rend() - cdeq.rbegin() == cbitdeq.rend() - cbitdeq.rbegin());
+ },
+ [&] {
+ // cbegin() and cend()
+ assert(cdeq.cend() - cdeq.cbegin() == cbitdeq.cend() - cbitdeq.cbegin());
+ },
+ [&] {
+ // crbegin() and crend()
+ assert(cdeq.crend() - cdeq.crbegin() == cbitdeq.crend() - cbitdeq.crbegin());
+ },
+ [&] {
+ // size() and maxsize()
+ assert(cdeq.size() == cbitdeq.size());
+ assert(cbitdeq.size() <= cbitdeq.max_size());
+ },
+ [&] {
+ // empty
+ assert(cdeq.empty() == cbitdeq.empty());
+ },
+ [&] {
+ // at (in range) and flip
+ if (!cdeq.empty()) {
+ size_t pos = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size() - 1);
+ auto& ref = deq.at(pos);
+ auto bitref = bitdeq.at(pos);
+ assert(ref == bitref);
+ if (ctx.randbool()) {
+ ref = !ref;
+ bitref.flip();
+ }
+ }
+ },
+ [&] {
+ // at (maybe out of range) and bit assign
+ size_t pos = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size() + maxlen);
+ bool newval = ctx.randbool();
+ bool throw_deq{false}, throw_bitdeq{false};
+ bool val_deq{false}, val_bitdeq{false};
+ try {
+ auto& ref = deq.at(pos);
+ val_deq = ref;
+ ref = newval;
+ } catch (const std::out_of_range&) {
+ throw_deq = true;
+ }
+ try {
+ auto ref = bitdeq.at(pos);
+ val_bitdeq = ref;
+ ref = newval;
+ } catch (const std::out_of_range&) {
+ throw_bitdeq = true;
+ }
+ assert(throw_deq == throw_bitdeq);
+ assert(throw_bitdeq == (pos >= cdeq.size()));
+ if (!throw_deq) assert(val_deq == val_bitdeq);
+ },
+ [&] {
+ // at (maybe out of range) (const)
+ size_t pos = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size() + maxlen);
+ bool throw_deq{false}, throw_bitdeq{false};
+ bool val_deq{false}, val_bitdeq{false};
+ try {
+ auto& ref = cdeq.at(pos);
+ val_deq = ref;
+ } catch (const std::out_of_range&) {
+ throw_deq = true;
+ }
+ try {
+ auto ref = cbitdeq.at(pos);
+ val_bitdeq = ref;
+ } catch (const std::out_of_range&) {
+ throw_bitdeq = true;
+ }
+ assert(throw_deq == throw_bitdeq);
+ assert(throw_bitdeq == (pos >= cdeq.size()));
+ if (!throw_deq) assert(val_deq == val_bitdeq);
+ },
+ [&] {
+ // operator[]
+ if (!cdeq.empty()) {
+ size_t pos = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size() - 1);
+ assert(deq[pos] == bitdeq[pos]);
+ if (ctx.randbool()) {
+ deq[pos] = !deq[pos];
+ bitdeq[pos].flip();
+ }
+ }
+ },
+ [&] {
+ // operator[] const
+ if (!cdeq.empty()) {
+ size_t pos = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size() - 1);
+ assert(deq[pos] == bitdeq[pos]);
+ }
+ },
+ [&] {
+ // front()
+ if (!cdeq.empty()) {
+ auto& ref = deq.front();
+ auto bitref = bitdeq.front();
+ assert(ref == bitref);
+ if (ctx.randbool()) {
+ ref = !ref;
+ bitref = !bitref;
+ }
+ }
+ },
+ [&] {
+ // front() const
+ if (!cdeq.empty()) {
+ auto& ref = cdeq.front();
+ auto bitref = cbitdeq.front();
+ assert(ref == bitref);
+ }
+ },
+ [&] {
+ // back() and swap(bool, ref)
+ if (!cdeq.empty()) {
+ auto& ref = deq.back();
+ auto bitref = bitdeq.back();
+ assert(ref == bitref);
+ if (ctx.randbool()) {
+ ref = !ref;
+ bitref.flip();
+ }
+ }
+ },
+ [&] {
+ // back() const
+ if (!cdeq.empty()) {
+ const auto& cdeq = deq;
+ const auto& cbitdeq = bitdeq;
+ auto& ref = cdeq.back();
+ auto bitref = cbitdeq.back();
+ assert(ref == bitref);
+ }
+ },
+ [&] {
+ // push_back()
+ if (cdeq.size() < limitlen) {
+ bool val = ctx.randbool();
+ if (cdeq.empty()) {
+ deq.push_back(val);
+ bitdeq.push_back(val);
+ } else {
+ size_t pos = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size() - 1);
+ auto& ref = deq[pos];
+ auto bitref = bitdeq[pos];
+ assert(ref == bitref);
+ deq.push_back(val);
+ bitdeq.push_back(val);
+ assert(ref == bitref); // references are not invalidated
+ }
+ }
+ },
+ [&] {
+ // push_front()
+ if (cdeq.size() < limitlen) {
+ bool val = ctx.randbool();
+ if (cdeq.empty()) {
+ deq.push_front(val);
+ bitdeq.push_front(val);
+ } else {
+ size_t pos = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size() - 1);
+ auto& ref = deq[pos];
+ auto bitref = bitdeq[pos];
+ assert(ref == bitref);
+ deq.push_front(val);
+ bitdeq.push_front(val);
+ assert(ref == bitref); // references are not invalidated
+ }
+ }
+ },
+ [&] {
+ // pop_back()
+ if (!cdeq.empty()) {
+ if (cdeq.size() == 1) {
+ deq.pop_back();
+ bitdeq.pop_back();
+ } else {
+ size_t pos = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size() - 2);
+ auto& ref = deq[pos];
+ auto bitref = bitdeq[pos];
+ assert(ref == bitref);
+ deq.pop_back();
+ bitdeq.pop_back();
+ assert(ref == bitref); // references to other elements are not invalidated
+ }
+ }
+ },
+ [&] {
+ // pop_front()
+ if (!cdeq.empty()) {
+ if (cdeq.size() == 1) {
+ deq.pop_front();
+ bitdeq.pop_front();
+ } else {
+ size_t pos = provider.ConsumeIntegralInRange<size_t>(1, cdeq.size() - 1);
+ auto& ref = deq[pos];
+ auto bitref = bitdeq[pos];
+ assert(ref == bitref);
+ deq.pop_front();
+ bitdeq.pop_front();
+ assert(ref == bitref); // references to other elements are not invalidated
+ }
+ }
+ },
+ [&] {
+ // erase (in middle, single)
+ if (!cdeq.empty()) {
+ size_t before = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size() - 1);
+ size_t after = cdeq.size() - 1 - before;
+ auto it = deq.erase(cdeq.begin() + before);
+ auto bitit = bitdeq.erase(cbitdeq.begin() + before);
+ assert(it == cdeq.begin() + before && it == cdeq.end() - after);
+ assert(bitit == cbitdeq.begin() + before && bitit == cbitdeq.end() - after);
+ }
+ },
+ [&] {
+ // erase (at front, range)
+ size_t count = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size());
+ auto it = deq.erase(cdeq.begin(), cdeq.begin() + count);
+ auto bitit = bitdeq.erase(cbitdeq.begin(), cbitdeq.begin() + count);
+ assert(it == deq.begin());
+ assert(bitit == bitdeq.begin());
+ },
+ [&] {
+ // erase (at back, range)
+ size_t count = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size());
+ auto it = deq.erase(cdeq.end() - count, cdeq.end());
+ auto bitit = bitdeq.erase(cbitdeq.end() - count, cbitdeq.end());
+ assert(it == deq.end());
+ assert(bitit == bitdeq.end());
+ },
+ [&] {
+ // erase (in middle, range)
+ size_t count = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size());
+ size_t before = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size() - count);
+ size_t after = cdeq.size() - count - before;
+ auto it = deq.erase(cdeq.begin() + before, cdeq.end() - after);
+ auto bitit = bitdeq.erase(cbitdeq.begin() + before, cbitdeq.end() - after);
+ assert(it == cdeq.begin() + before && it == cdeq.end() - after);
+ assert(bitit == cbitdeq.begin() + before && bitit == cbitdeq.end() - after);
+ },
+ [&] {
+ // insert/emplace (in middle, single)
+ if (cdeq.size() < limitlen) {
+ size_t before = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size());
+ bool val = ctx.randbool();
+ bool do_emplace = provider.ConsumeBool();
+ auto it = deq.insert(cdeq.begin() + before, val);
+ auto bitit = do_emplace ? bitdeq.emplace(cbitdeq.begin() + before, val)
+ : bitdeq.insert(cbitdeq.begin() + before, val);
+ assert(it == deq.begin() + before);
+ assert(bitit == bitdeq.begin() + before);
+ }
+ },
+ [&] {
+ // insert (at front, begin/end)
+ if (cdeq.size() < limitlen) {
+ size_t count = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
+ auto rand_begin = RANDDATA.begin() + ctx.randbits(RANDDATA_BITS);
+ auto rand_end = rand_begin + count;
+ auto it = deq.insert(cdeq.begin(), rand_begin, rand_end);
+ auto bitit = bitdeq.insert(cbitdeq.begin(), rand_begin, rand_end);
+ assert(it == cdeq.begin());
+ assert(bitit == cbitdeq.begin());
+ }
+ },
+ [&] {
+ // insert (at back, begin/end)
+ if (cdeq.size() < limitlen) {
+ size_t count = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
+ auto rand_begin = RANDDATA.begin() + ctx.randbits(RANDDATA_BITS);
+ auto rand_end = rand_begin + count;
+ auto it = deq.insert(cdeq.end(), rand_begin, rand_end);
+ auto bitit = bitdeq.insert(cbitdeq.end(), rand_begin, rand_end);
+ assert(it == cdeq.end() - count);
+ assert(bitit == cbitdeq.end() - count);
+ }
+ },
+ [&] {
+ // insert (in middle, range)
+ if (cdeq.size() < limitlen) {
+ size_t count = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
+ size_t before = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size());
+ bool val = ctx.randbool();
+ auto it = deq.insert(cdeq.begin() + before, count, val);
+ auto bitit = bitdeq.insert(cbitdeq.begin() + before, count, val);
+ assert(it == deq.begin() + before);
+ assert(bitit == bitdeq.begin() + before);
+ }
+ },
+ [&] {
+ // insert (in middle, begin/end)
+ if (cdeq.size() < limitlen) {
+ size_t count = provider.ConsumeIntegralInRange<size_t>(0, maxlen);
+ size_t before = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size());
+ auto rand_begin = RANDDATA.begin() + ctx.randbits(RANDDATA_BITS);
+ auto rand_end = rand_begin + count;
+ auto it = deq.insert(cdeq.begin() + before, rand_begin, rand_end);
+ auto bitit = bitdeq.insert(cbitdeq.begin() + before, rand_begin, rand_end);
+ assert(it == deq.begin() + before);
+ assert(bitit == bitdeq.begin() + before);
+ }
+ }
+ );
+ }
+
+}
diff --git a/src/test/fuzz/pow.cpp b/src/test/fuzz/pow.cpp
index 0004d82d66..507ce57ec0 100644
--- a/src/test/fuzz/pow.cpp
+++ b/src/test/fuzz/pow.cpp
@@ -83,3 +83,40 @@ FUZZ_TARGET_INIT(pow, initialize_pow)
}
}
}
+
+
+FUZZ_TARGET_INIT(pow_transition, initialize_pow)
+{
+ FuzzedDataProvider fuzzed_data_provider(buffer.data(), buffer.size());
+ const Consensus::Params& consensus_params{Params().GetConsensus()};
+ std::vector<std::unique_ptr<CBlockIndex>> blocks;
+
+ const uint32_t old_time{fuzzed_data_provider.ConsumeIntegral<uint32_t>()};
+ const uint32_t new_time{fuzzed_data_provider.ConsumeIntegral<uint32_t>()};
+ const int32_t version{fuzzed_data_provider.ConsumeIntegral<int32_t>()};
+ uint32_t nbits{fuzzed_data_provider.ConsumeIntegral<uint32_t>()};
+
+ const arith_uint256 pow_limit = UintToArith256(consensus_params.powLimit);
+ arith_uint256 old_target;
+ old_target.SetCompact(nbits);
+ if (old_target > pow_limit) {
+ nbits = pow_limit.GetCompact();
+ }
+ // Create one difficulty adjustment period worth of headers
+ for (int height = 0; height < consensus_params.DifficultyAdjustmentInterval(); ++height) {
+ CBlockHeader header;
+ header.nVersion = version;
+ header.nTime = old_time;
+ header.nBits = nbits;
+ if (height == consensus_params.DifficultyAdjustmentInterval() - 1) {
+ header.nTime = new_time;
+ }
+ auto current_block{std::make_unique<CBlockIndex>(header)};
+ current_block->pprev = blocks.empty() ? nullptr : blocks.back().get();
+ current_block->nHeight = height;
+ blocks.emplace_back(std::move(current_block)).get();
+ }
+ auto last_block{blocks.back().get()};
+ unsigned int new_nbits{GetNextWorkRequired(last_block, nullptr, consensus_params)};
+ Assert(PermittedDifficultyTransition(consensus_params, last_block->nHeight + 1, last_block->nBits, new_nbits));
+}
diff --git a/src/test/fuzz/utxo_snapshot.cpp b/src/test/fuzz/utxo_snapshot.cpp
index 0b596492be..8abb943266 100644
--- a/src/test/fuzz/utxo_snapshot.cpp
+++ b/src/test/fuzz/utxo_snapshot.cpp
@@ -58,7 +58,7 @@ FUZZ_TARGET_INIT(utxo_snapshot, initialize_chain)
if (fuzzed_data_provider.ConsumeBool()) {
for (const auto& block : *g_chain) {
BlockValidationState dummy;
- bool processed{chainman.ProcessNewBlockHeaders({*block}, dummy)};
+ bool processed{chainman.ProcessNewBlockHeaders({*block}, true, dummy)};
Assert(processed);
const auto* index{WITH_LOCK(::cs_main, return chainman.m_blockman.LookupBlockIndex(block->GetHash()))};
Assert(index);
diff --git a/src/test/headers_sync_chainwork_tests.cpp b/src/test/headers_sync_chainwork_tests.cpp
new file mode 100644
index 0000000000..41241ebee2
--- /dev/null
+++ b/src/test/headers_sync_chainwork_tests.cpp
@@ -0,0 +1,146 @@
+// Copyright (c) 2022 The Bitcoin Core developers
+// Distributed under the MIT software license, see the accompanying
+// file COPYING or http://www.opensource.org/licenses/mit-license.php.
+
+#include <chain.h>
+#include <chainparams.h>
+#include <consensus/params.h>
+#include <headerssync.h>
+#include <pow.h>
+#include <test/util/setup_common.h>
+#include <validation.h>
+#include <vector>
+
+#include <boost/test/unit_test.hpp>
+
+struct HeadersGeneratorSetup : public RegTestingSetup {
+ /** Search for a nonce to meet (regtest) proof of work */
+ void FindProofOfWork(CBlockHeader& starting_header);
+ /**
+ * Generate headers in a chain that build off a given starting hash, using
+ * the given nVersion, advancing time by 1 second from the starting
+ * prev_time, and with a fixed merkle root hash.
+ */
+ void GenerateHeaders(std::vector<CBlockHeader>& headers, size_t count,
+ const uint256& starting_hash, const int nVersion, int prev_time,
+ const uint256& merkle_root, const uint32_t nBits);
+};
+
+void HeadersGeneratorSetup::FindProofOfWork(CBlockHeader& starting_header)
+{
+ while (!CheckProofOfWork(starting_header.GetHash(), starting_header.nBits, Params().GetConsensus())) {
+ ++(starting_header.nNonce);
+ }
+}
+
+void HeadersGeneratorSetup::GenerateHeaders(std::vector<CBlockHeader>& headers,
+ size_t count, const uint256& starting_hash, const int nVersion, int prev_time,
+ const uint256& merkle_root, const uint32_t nBits)
+{
+ uint256 prev_hash = starting_hash;
+
+ while (headers.size() < count) {
+ headers.push_back(CBlockHeader());
+ CBlockHeader& next_header = headers.back();
+ next_header.nVersion = nVersion;
+ next_header.hashPrevBlock = prev_hash;
+ next_header.hashMerkleRoot = merkle_root;
+ next_header.nTime = prev_time+1;
+ next_header.nBits = nBits;
+
+ FindProofOfWork(next_header);
+ prev_hash = next_header.GetHash();
+ prev_time = next_header.nTime;
+ }
+ return;
+}
+
+BOOST_FIXTURE_TEST_SUITE(headers_sync_chainwork_tests, HeadersGeneratorSetup)
+
+// In this test, we construct two sets of headers from genesis, one with
+// sufficient proof of work and one without.
+// 1. We deliver the first set of headers and verify that the headers sync state
+// updates to the REDOWNLOAD phase successfully.
+// 2. Then we deliver the second set of headers and verify that they fail
+// processing (presumably due to commitments not matching).
+// 3. Finally, we verify that repeating with the first set of headers in both
+// phases is successful.
+BOOST_AUTO_TEST_CASE(headers_sync_state)
+{
+ std::vector<CBlockHeader> first_chain;
+ std::vector<CBlockHeader> second_chain;
+
+ std::unique_ptr<HeadersSyncState> hss;
+
+ const int target_blocks = 15000;
+ arith_uint256 chain_work = target_blocks*2;
+
+ // Generate headers for two different chains (using differing merkle roots
+ // to ensure the headers are different).
+ GenerateHeaders(first_chain, target_blocks-1, Params().GenesisBlock().GetHash(),
+ Params().GenesisBlock().nVersion, Params().GenesisBlock().nTime,
+ ArithToUint256(0), Params().GenesisBlock().nBits);
+
+ GenerateHeaders(second_chain, target_blocks-2, Params().GenesisBlock().GetHash(),
+ Params().GenesisBlock().nVersion, Params().GenesisBlock().nTime,
+ ArithToUint256(1), Params().GenesisBlock().nBits);
+
+ const CBlockIndex* chain_start = WITH_LOCK(::cs_main, return m_node.chainman->m_blockman.LookupBlockIndex(Params().GenesisBlock().GetHash()));
+ std::vector<CBlockHeader> headers_batch;
+
+ // Feed the first chain to HeadersSyncState, by delivering 1 header
+ // initially and then the rest.
+ headers_batch.insert(headers_batch.end(), std::next(first_chain.begin()), first_chain.end());
+
+ hss.reset(new HeadersSyncState(0, Params().GetConsensus(), chain_start, chain_work));
+ (void)hss->ProcessNextHeaders({first_chain.front()}, true);
+ // Pretend the first header is still "full", so we don't abort.
+ auto result = hss->ProcessNextHeaders(headers_batch, true);
+
+ // This chain should look valid, and we should have met the proof-of-work
+ // requirement.
+ BOOST_CHECK(result.success);
+ BOOST_CHECK(result.request_more);
+ BOOST_CHECK(hss->GetState() == HeadersSyncState::State::REDOWNLOAD);
+
+ // Try to sneakily feed back the second chain.
+ result = hss->ProcessNextHeaders(second_chain, true);
+ BOOST_CHECK(!result.success); // foiled!
+ BOOST_CHECK(hss->GetState() == HeadersSyncState::State::FINAL);
+
+ // Now try again, this time feeding the first chain twice.
+ hss.reset(new HeadersSyncState(0, Params().GetConsensus(), chain_start, chain_work));
+ (void)hss->ProcessNextHeaders(first_chain, true);
+ BOOST_CHECK(hss->GetState() == HeadersSyncState::State::REDOWNLOAD);
+
+ result = hss->ProcessNextHeaders(first_chain, true);
+ BOOST_CHECK(result.success);
+ BOOST_CHECK(!result.request_more);
+ // All headers should be ready for acceptance:
+ BOOST_CHECK(result.pow_validated_headers.size() == first_chain.size());
+ // Nothing left for the sync logic to do:
+ BOOST_CHECK(hss->GetState() == HeadersSyncState::State::FINAL);
+
+ // Finally, verify that just trying to process the second chain would not
+ // succeed (too little work)
+ hss.reset(new HeadersSyncState(0, Params().GetConsensus(), chain_start, chain_work));
+ BOOST_CHECK(hss->GetState() == HeadersSyncState::State::PRESYNC);
+ // Pretend just the first message is "full", so we don't abort.
+ (void)hss->ProcessNextHeaders({second_chain.front()}, true);
+ BOOST_CHECK(hss->GetState() == HeadersSyncState::State::PRESYNC);
+
+ headers_batch.clear();
+ headers_batch.insert(headers_batch.end(), std::next(second_chain.begin(), 1), second_chain.end());
+ // Tell the sync logic that the headers message was not full, implying no
+ // more headers can be requested. For a low-work chain, this should cause
+ // the sync to end with no headers for acceptance.
+ result = hss->ProcessNextHeaders(headers_batch, false);
+ BOOST_CHECK(hss->GetState() == HeadersSyncState::State::FINAL);
+ BOOST_CHECK(result.pow_validated_headers.empty());
+ BOOST_CHECK(!result.request_more);
+ // Nevertheless, no validation errors should have been detected with the
+ // chain:
+ BOOST_CHECK(result.success);
+}
+
+BOOST_AUTO_TEST_SUITE_END()
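
For orientation, here is a minimal sketch of how a caller might drive HeadersSyncState through its PRESYNC and REDOWNLOAD phases, using only the interface exercised in the test above. It is not the actual net_processing code: GetNextHeadersBatchFromPeer() and MAX_HEADERS_PER_MSG are hypothetical stand-ins for the getheaders/headers round trip and the protocol's per-message header limit.

    // Sketch only: drive one peer's headers sync to completion.
    void SyncHeadersFromPeer(ChainstateManager& chainman, int64_t peer_id,
                             const CBlockIndex* chain_start,
                             const arith_uint256& minimum_required_work)
    {
        HeadersSyncState hss(peer_id, Params().GetConsensus(), chain_start, minimum_required_work);
        constexpr size_t MAX_HEADERS_PER_MSG{2000}; // assumed per-message header limit

        while (hss.GetState() != HeadersSyncState::State::FINAL) {
            // Hypothetical helper standing in for a getheaders/headers round trip.
            std::vector<CBlockHeader> headers{GetNextHeadersBatchFromPeer(peer_id)};
            const bool full_message{headers.size() == MAX_HEADERS_PER_MSG};
            auto result = hss.ProcessNextHeaders(headers, full_message);
            if (!result.success) break; // e.g. commitment mismatch; state is now FINAL
            if (!result.pow_validated_headers.empty()) {
                // Only during REDOWNLOAD: headers whose chain work and commitments
                // checked out are handed back for permanent acceptance.
                BlockValidationState state;
                chainman.ProcessNewBlockHeaders(result.pow_validated_headers,
                                                /*min_pow_checked=*/true, state);
            }
            if (!result.request_more) break;
        }
    }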
diff --git a/src/test/miner_tests.cpp b/src/test/miner_tests.cpp
index d88aed7d4a..9f5fb17b60 100644
--- a/src/test/miner_tests.cpp
+++ b/src/test/miner_tests.cpp
@@ -588,7 +588,7 @@ BOOST_AUTO_TEST_CASE(CreateNewBlock_validity)
pblock->nNonce = bi.nonce;
}
std::shared_ptr<const CBlock> shared_pblock = std::make_shared<const CBlock>(*pblock);
- BOOST_CHECK(Assert(m_node.chainman)->ProcessNewBlock(shared_pblock, true, nullptr));
+ BOOST_CHECK(Assert(m_node.chainman)->ProcessNewBlock(shared_pblock, true, true, nullptr));
pblock->hashPrevBlock = pblock->GetHash();
}
diff --git a/src/test/pow_tests.cpp b/src/test/pow_tests.cpp
index 2f43ae52f7..3695ea9d16 100644
--- a/src/test/pow_tests.cpp
+++ b/src/test/pow_tests.cpp
@@ -20,7 +20,14 @@ BOOST_AUTO_TEST_CASE(get_next_work)
pindexLast.nHeight = 32255;
pindexLast.nTime = 1262152739; // Block #32255
pindexLast.nBits = 0x1d00ffff;
- BOOST_CHECK_EQUAL(CalculateNextWorkRequired(&pindexLast, nLastRetargetTime, chainParams->GetConsensus()), 0x1d00d86aU);
+
+ // Here (and below): expected_nbits is calculated in
+ // CalculateNextWorkRequired(); redoing the calculation here would be just
+ // reimplementing the same code that is written in pow.cpp. Rather than
+ // copy that code, we just hardcode the expected result.
+ unsigned int expected_nbits = 0x1d00d86aU;
+ BOOST_CHECK_EQUAL(CalculateNextWorkRequired(&pindexLast, nLastRetargetTime, chainParams->GetConsensus()), expected_nbits);
+ BOOST_CHECK(PermittedDifficultyTransition(chainParams->GetConsensus(), pindexLast.nHeight+1, pindexLast.nBits, expected_nbits));
}
/* Test the constraint on the upper bound for next work */
@@ -32,7 +39,9 @@ BOOST_AUTO_TEST_CASE(get_next_work_pow_limit)
pindexLast.nHeight = 2015;
pindexLast.nTime = 1233061996; // Block #2015
pindexLast.nBits = 0x1d00ffff;
- BOOST_CHECK_EQUAL(CalculateNextWorkRequired(&pindexLast, nLastRetargetTime, chainParams->GetConsensus()), 0x1d00ffffU);
+ unsigned int expected_nbits = 0x1d00ffffU;
+ BOOST_CHECK_EQUAL(CalculateNextWorkRequired(&pindexLast, nLastRetargetTime, chainParams->GetConsensus()), expected_nbits);
+ BOOST_CHECK(PermittedDifficultyTransition(chainParams->GetConsensus(), pindexLast.nHeight+1, pindexLast.nBits, expected_nbits));
}
/* Test the constraint on the lower bound for actual time taken */
@@ -44,7 +53,12 @@ BOOST_AUTO_TEST_CASE(get_next_work_lower_limit_actual)
pindexLast.nHeight = 68543;
pindexLast.nTime = 1279297671; // Block #68543
pindexLast.nBits = 0x1c05a3f4;
- BOOST_CHECK_EQUAL(CalculateNextWorkRequired(&pindexLast, nLastRetargetTime, chainParams->GetConsensus()), 0x1c0168fdU);
+ unsigned int expected_nbits = 0x1c0168fdU;
+ BOOST_CHECK_EQUAL(CalculateNextWorkRequired(&pindexLast, nLastRetargetTime, chainParams->GetConsensus()), expected_nbits);
+ BOOST_CHECK(PermittedDifficultyTransition(chainParams->GetConsensus(), pindexLast.nHeight+1, pindexLast.nBits, expected_nbits));
+ // Test that reducing nbits further would not be a PermittedDifficultyTransition.
+ unsigned int invalid_nbits = expected_nbits-1;
+ BOOST_CHECK(!PermittedDifficultyTransition(chainParams->GetConsensus(), pindexLast.nHeight+1, pindexLast.nBits, invalid_nbits));
}
/* Test the constraint on the upper bound for actual time taken */
@@ -56,7 +70,12 @@ BOOST_AUTO_TEST_CASE(get_next_work_upper_limit_actual)
pindexLast.nHeight = 46367;
pindexLast.nTime = 1269211443; // Block #46367
pindexLast.nBits = 0x1c387f6f;
- BOOST_CHECK_EQUAL(CalculateNextWorkRequired(&pindexLast, nLastRetargetTime, chainParams->GetConsensus()), 0x1d00e1fdU);
+ unsigned int expected_nbits = 0x1d00e1fdU;
+ BOOST_CHECK_EQUAL(CalculateNextWorkRequired(&pindexLast, nLastRetargetTime, chainParams->GetConsensus()), expected_nbits);
+ BOOST_CHECK(PermittedDifficultyTransition(chainParams->GetConsensus(), pindexLast.nHeight+1, pindexLast.nBits, expected_nbits));
+ // Test that increasing nbits further would not be a PermittedDifficultyTransition.
+ unsigned int invalid_nbits = expected_nbits+1;
+ BOOST_CHECK(!PermittedDifficultyTransition(chainParams->GetConsensus(), pindexLast.nHeight+1, pindexLast.nBits, invalid_nbits));
}
BOOST_AUTO_TEST_CASE(CheckProofOfWork_test_negative_target)
diff --git a/src/test/skiplist_tests.cpp b/src/test/skiplist_tests.cpp
index 3d3fd5d93d..9f5e3ab7ae 100644
--- a/src/test/skiplist_tests.cpp
+++ b/src/test/skiplist_tests.cpp
@@ -78,7 +78,7 @@ BOOST_AUTO_TEST_CASE(getlocator_test)
for (int n=0; n<100; n++) {
int r = InsecureRandRange(150000);
CBlockIndex* tip = (r < 100000) ? &vBlocksMain[r] : &vBlocksSide[r - 100000];
- CBlockLocator locator = chain.GetLocator(tip);
+ CBlockLocator locator = GetLocator(tip);
// The first result must be the block itself, the last one must be genesis.
BOOST_CHECK(locator.vHave.front() == tip->GetBlockHash());
diff --git a/src/test/util/mining.cpp b/src/test/util/mining.cpp
index 88cf9647e7..faa0b2878c 100644
--- a/src/test/util/mining.cpp
+++ b/src/test/util/mining.cpp
@@ -68,7 +68,7 @@ CTxIn MineBlock(const NodeContext& node, const CScript& coinbase_scriptPubKey)
assert(block->nNonce);
}
- bool processed{Assert(node.chainman)->ProcessNewBlock(block, true, nullptr)};
+ bool processed{Assert(node.chainman)->ProcessNewBlock(block, true, true, nullptr)};
assert(processed);
return CTxIn{block->vtx[0]->GetHash(), 0};
diff --git a/src/test/util/setup_common.cpp b/src/test/util/setup_common.cpp
index 30d26ecf79..8a0b03c9d1 100644
--- a/src/test/util/setup_common.cpp
+++ b/src/test/util/setup_common.cpp
@@ -321,7 +321,7 @@ CBlock TestChain100Setup::CreateAndProcessBlock(
const CBlock block = this->CreateBlock(txns, scriptPubKey, *chainstate);
std::shared_ptr<const CBlock> shared_pblock = std::make_shared<const CBlock>(block);
- Assert(m_node.chainman)->ProcessNewBlock(shared_pblock, true, nullptr);
+ Assert(m_node.chainman)->ProcessNewBlock(shared_pblock, true, true, nullptr);
return block;
}
diff --git a/src/test/util_tests.cpp b/src/test/util_tests.cpp
index 921cd37327..61ceca9837 100644
--- a/src/test/util_tests.cpp
+++ b/src/test/util_tests.cpp
@@ -23,6 +23,7 @@
#include <util/string.h>
#include <util/time.h>
#include <util/vector.h>
+#include <util/bitdeque.h>
#include <array>
#include <fstream>
diff --git a/src/test/validation_block_tests.cpp b/src/test/validation_block_tests.cpp
index 1c5ca18759..bb1ade153a 100644
--- a/src/test/validation_block_tests.cpp
+++ b/src/test/validation_block_tests.cpp
@@ -100,7 +100,7 @@ std::shared_ptr<CBlock> MinerTestingSetup::FinalizeBlock(std::shared_ptr<CBlock>
// submit block header, so that miner can get the block height from the
// global state and the node has the topology of the chain
BlockValidationState ignored;
- BOOST_CHECK(Assert(m_node.chainman)->ProcessNewBlockHeaders({pblock->GetBlockHeader()}, ignored));
+ BOOST_CHECK(Assert(m_node.chainman)->ProcessNewBlockHeaders({pblock->GetBlockHeader()}, true, ignored));
return pblock;
}
@@ -157,7 +157,7 @@ BOOST_AUTO_TEST_CASE(processnewblock_signals_ordering)
bool ignored;
// Connect the genesis block and drain any outstanding events
- BOOST_CHECK(Assert(m_node.chainman)->ProcessNewBlock(std::make_shared<CBlock>(Params().GenesisBlock()), true, &ignored));
+ BOOST_CHECK(Assert(m_node.chainman)->ProcessNewBlock(std::make_shared<CBlock>(Params().GenesisBlock()), true, true, &ignored));
SyncWithValidationInterfaceQueue();
// subscribe to events (this subscriber will validate event ordering)
@@ -179,13 +179,13 @@ BOOST_AUTO_TEST_CASE(processnewblock_signals_ordering)
FastRandomContext insecure;
for (int i = 0; i < 1000; i++) {
auto block = blocks[insecure.randrange(blocks.size() - 1)];
- Assert(m_node.chainman)->ProcessNewBlock(block, true, &ignored);
+ Assert(m_node.chainman)->ProcessNewBlock(block, true, true, &ignored);
}
// to make sure that eventually we process the full chain - do it here
for (const auto& block : blocks) {
if (block->vtx.size() == 1) {
- bool processed = Assert(m_node.chainman)->ProcessNewBlock(block, true, &ignored);
+ bool processed = Assert(m_node.chainman)->ProcessNewBlock(block, true, true, &ignored);
assert(processed);
}
}
@@ -224,7 +224,7 @@ BOOST_AUTO_TEST_CASE(mempool_locks_reorg)
{
bool ignored;
auto ProcessBlock = [&](std::shared_ptr<const CBlock> block) -> bool {
- return Assert(m_node.chainman)->ProcessNewBlock(block, /*force_processing=*/true, /*new_block=*/&ignored);
+ return Assert(m_node.chainman)->ProcessNewBlock(block, /*force_processing=*/true, /*min_pow_checked=*/true, /*new_block=*/&ignored);
};
// Process all mined blocks
diff --git a/src/test/validation_chainstate_tests.cpp b/src/test/validation_chainstate_tests.cpp
index ee60f9aa4d..98294b9028 100644
--- a/src/test/validation_chainstate_tests.cpp
+++ b/src/test/validation_chainstate_tests.cpp
@@ -132,7 +132,7 @@ BOOST_FIXTURE_TEST_CASE(chainstate_update_tip, TestChain100Setup)
bool checked = CheckBlock(*pblock, state, chainparams.GetConsensus());
BOOST_CHECK(checked);
bool accepted = background_cs.AcceptBlock(
- pblock, state, &pindex, true, nullptr, &newblock);
+ pblock, state, &pindex, true, nullptr, &newblock, true);
BOOST_CHECK(accepted);
}
// UpdateTip is called here
diff --git a/src/util/bitdeque.h b/src/util/bitdeque.h
new file mode 100644
index 0000000000..1e34b72475
--- /dev/null
+++ b/src/util/bitdeque.h
@@ -0,0 +1,469 @@
+// Copyright (c) 2022 The Bitcoin Core developers
+// Distributed under the MIT software license, see the accompanying
+// file COPYING or http://www.opensource.org/licenses/mit-license.php.
+
+#ifndef BITCOIN_UTIL_BITDEQUE_H
+#define BITCOIN_UTIL_BITDEQUE_H
+
+#include <bitset>
+#include <cstddef>
+#include <deque>
+#include <limits>
+#include <stdexcept>
+#include <tuple>
+
+/** Class that mimics std::deque<bool>, but with std::vector<bool>'s bit packing.
+ *
+ * BlobSize selects the (minimum) number of bits that are allocated at once.
+ * Larger values reduce the asymptotic memory usage overhead, at the cost of
+ * needing larger up-front allocations. The default is 4096 bytes.
+ */
+template<int BlobSize = 4096 * 8>
+class bitdeque
+{
+ // Internal definitions
+ using word_type = std::bitset<BlobSize>;
+ using deque_type = std::deque<word_type>;
+ static_assert(BlobSize > 0);
+ static constexpr int BITS_PER_WORD = BlobSize;
+
+ // Forward and friend declarations of iterator types.
+ template<bool Const> class Iterator;
+ template<bool Const> friend class Iterator;
+
+ /** Iterator to a bitdeque element, const or not. */
+ template<bool Const>
+ class Iterator
+ {
+ using deque_iterator = std::conditional_t<Const, typename deque_type::const_iterator, typename deque_type::iterator>;
+
+ deque_iterator m_it;
+ int m_bitpos{0};
+ Iterator(const deque_iterator& it, int bitpos) : m_it(it), m_bitpos(bitpos) {}
+ friend class bitdeque;
+
+ public:
+ using iterator_category = std::random_access_iterator_tag;
+ using value_type = bool;
+ using pointer = void;
+ using const_pointer = void;
+ using reference = std::conditional_t<Const, bool, typename word_type::reference>;
+ using const_reference = bool;
+ using difference_type = std::ptrdiff_t;
+
+ /** Default constructor. */
+ Iterator() = default;
+
+ /** Default copy constructor. */
+ Iterator(const Iterator&) = default;
+
+ /** Conversion from non-const to const iterator. */
+ template<bool ConstArg = Const, typename = std::enable_if_t<Const && ConstArg>>
+ Iterator(const Iterator<false>& x) : m_it(x.m_it), m_bitpos(x.m_bitpos) {}
+
+ Iterator& operator+=(difference_type dist)
+ {
+ if (dist > 0) {
+ if (dist + m_bitpos >= BITS_PER_WORD) {
+ ++m_it;
+ dist -= BITS_PER_WORD - m_bitpos;
+ m_bitpos = 0;
+ }
+ auto jump = dist / BITS_PER_WORD;
+ m_it += jump;
+ m_bitpos += dist - jump * BITS_PER_WORD;
+ } else if (dist < 0) {
+ dist = -dist;
+ if (dist > m_bitpos) {
+ --m_it;
+ dist -= m_bitpos + 1;
+ m_bitpos = BITS_PER_WORD - 1;
+ }
+ auto jump = dist / BITS_PER_WORD;
+ m_it -= jump;
+ m_bitpos -= dist - jump * BITS_PER_WORD;
+ }
+ return *this;
+ }
+
+ friend difference_type operator-(const Iterator& x, const Iterator& y)
+ {
+ return BITS_PER_WORD * (x.m_it - y.m_it) + x.m_bitpos - y.m_bitpos;
+ }
+
+ Iterator& operator=(const Iterator&) = default;
+ Iterator& operator-=(difference_type dist) { return operator+=(-dist); }
+ Iterator& operator++() { ++m_bitpos; if (m_bitpos == BITS_PER_WORD) { m_bitpos = 0; ++m_it; }; return *this; }
+ Iterator operator++(int) { auto ret{*this}; operator++(); return ret; }
+ Iterator& operator--() { if (m_bitpos == 0) { m_bitpos = BITS_PER_WORD; --m_it; }; --m_bitpos; return *this; }
+ Iterator operator--(int) { auto ret{*this}; operator--(); return ret; }
+ friend Iterator operator+(Iterator x, difference_type dist) { x += dist; return x; }
+ friend Iterator operator+(difference_type dist, Iterator x) { x += dist; return x; }
+ friend Iterator operator-(Iterator x, difference_type dist) { x -= dist; return x; }
+ friend bool operator<(const Iterator& x, const Iterator& y) { return std::tie(x.m_it, x.m_bitpos) < std::tie(y.m_it, y.m_bitpos); }
+ friend bool operator>(const Iterator& x, const Iterator& y) { return std::tie(x.m_it, x.m_bitpos) > std::tie(y.m_it, y.m_bitpos); }
+ friend bool operator<=(const Iterator& x, const Iterator& y) { return std::tie(x.m_it, x.m_bitpos) <= std::tie(y.m_it, y.m_bitpos); }
+ friend bool operator>=(const Iterator& x, const Iterator& y) { return std::tie(x.m_it, x.m_bitpos) >= std::tie(y.m_it, y.m_bitpos); }
+ friend bool operator==(const Iterator& x, const Iterator& y) { return x.m_it == y.m_it && x.m_bitpos == y.m_bitpos; }
+ friend bool operator!=(const Iterator& x, const Iterator& y) { return x.m_it != y.m_it || x.m_bitpos != y.m_bitpos; }
+ reference operator*() const { return (*m_it)[m_bitpos]; }
+ reference operator[](difference_type pos) const { return *(*this + pos); }
+ };
+
+public:
+ using value_type = bool;
+ using size_type = std::size_t;
+ using difference_type = typename deque_type::difference_type;
+ using reference = typename word_type::reference;
+ using const_reference = bool;
+ using iterator = Iterator<false>;
+ using const_iterator = Iterator<true>;
+ using pointer = void;
+ using const_pointer = void;
+ using reverse_iterator = std::reverse_iterator<iterator>;
+ using const_reverse_iterator = std::reverse_iterator<const_iterator>;
+
+private:
+ /** Deque of bitsets storing the actual bit data. */
+ deque_type m_deque;
+
+ /** Number of unused bits at the front of m_deque.front(). */
+ int m_pad_begin;
+
+ /** Number of unused bits at the back of m_deque.back(). */
+ int m_pad_end;
+
+ /** Shrink the container by n bits, removing from the end. */
+ void erase_back(size_type n)
+ {
+ if (n >= static_cast<size_type>(BITS_PER_WORD - m_pad_end)) {
+ n -= BITS_PER_WORD - m_pad_end;
+ m_pad_end = 0;
+ m_deque.erase(m_deque.end() - 1 - (n / BITS_PER_WORD), m_deque.end());
+ n %= BITS_PER_WORD;
+ }
+ if (n) {
+ auto& last = m_deque.back();
+ while (n) {
+ last.reset(BITS_PER_WORD - 1 - m_pad_end);
+ ++m_pad_end;
+ --n;
+ }
+ }
+ }
+
+ /** Extend the container by n bits, adding at the end. */
+ void extend_back(size_type n)
+ {
+ if (n > static_cast<size_type>(m_pad_end)) {
+ n -= m_pad_end + 1;
+ m_pad_end = BITS_PER_WORD - 1;
+ m_deque.insert(m_deque.end(), 1 + (n / BITS_PER_WORD), {});
+ n %= BITS_PER_WORD;
+ }
+ m_pad_end -= n;
+ }
+
+ /** Shrink the container by n bits, removing from the beginning. */
+ void erase_front(size_type n)
+ {
+ if (n >= static_cast<size_type>(BITS_PER_WORD - m_pad_begin)) {
+ n -= BITS_PER_WORD - m_pad_begin;
+ m_pad_begin = 0;
+ m_deque.erase(m_deque.begin(), m_deque.begin() + 1 + (n / BITS_PER_WORD));
+ n %= BITS_PER_WORD;
+ }
+ if (n) {
+ auto& first = m_deque.front();
+ while (n) {
+ first.reset(m_pad_begin);
+ ++m_pad_begin;
+ --n;
+ }
+ }
+ }
+
+ /** Extend the container by n bits, adding at the beginning. */
+ void extend_front(size_type n)
+ {
+ if (n > static_cast<size_type>(m_pad_begin)) {
+ n -= m_pad_begin + 1;
+ m_pad_begin = BITS_PER_WORD - 1;
+ m_deque.insert(m_deque.begin(), 1 + (n / BITS_PER_WORD), {});
+ n %= BITS_PER_WORD;
+ }
+ m_pad_begin -= n;
+ }
+
+ /** Insert a sequence of falses anywhere in the container. */
+ void insert_zeroes(size_type before, size_type count)
+ {
+ size_type after = size() - before;
+ if (before < after) {
+ extend_front(count);
+ std::move(begin() + count, begin() + count + before, begin());
+ } else {
+ extend_back(count);
+ std::move_backward(begin() + before, begin() + before + after, end());
+ }
+ }
+
+public:
+ /** Construct an empty container. */
+ explicit bitdeque() : m_pad_begin{0}, m_pad_end{0} {}
+
+ /** Set the container equal to count times the value of val. */
+ void assign(size_type count, bool val)
+ {
+ m_deque.clear();
+ m_deque.resize((count + BITS_PER_WORD - 1) / BITS_PER_WORD);
+ m_pad_begin = 0;
+ m_pad_end = 0;
+ if (val) {
+ for (auto& elem : m_deque) elem.flip();
+ }
+ if (count % BITS_PER_WORD) {
+ erase_back(BITS_PER_WORD - (count % BITS_PER_WORD));
+ }
+ }
+
+ /** Construct a container containing count times the value of val. */
+ bitdeque(size_type count, bool val) { assign(count, val); }
+
+ /** Construct a container containing count false values. */
+ explicit bitdeque(size_t count) { assign(count, false); }
+
+ /** Copy constructor. */
+ bitdeque(const bitdeque&) = default;
+
+ /** Move constructor. */
+ bitdeque(bitdeque&&) noexcept = default;
+
+ /** Copy assignment operator. */
+ bitdeque& operator=(const bitdeque& other) = default;
+
+ /** Move assignment operator. */
+ bitdeque& operator=(bitdeque&& other) noexcept = default;
+
+ // Iterator functions.
+ iterator begin() noexcept { return {m_deque.begin(), m_pad_begin}; }
+ iterator end() noexcept { return iterator{m_deque.end(), 0} - m_pad_end; }
+ const_iterator begin() const noexcept { return const_iterator{m_deque.cbegin(), m_pad_begin}; }
+ const_iterator cbegin() const noexcept { return const_iterator{m_deque.cbegin(), m_pad_begin}; }
+ const_iterator end() const noexcept { return const_iterator{m_deque.cend(), 0} - m_pad_end; }
+ const_iterator cend() const noexcept { return const_iterator{m_deque.cend(), 0} - m_pad_end; }
+ reverse_iterator rbegin() noexcept { return reverse_iterator{end()}; }
+ reverse_iterator rend() noexcept { return reverse_iterator{begin()}; }
+ const_reverse_iterator rbegin() const noexcept { return const_reverse_iterator{cend()}; }
+ const_reverse_iterator crbegin() const noexcept { return const_reverse_iterator{cend()}; }
+ const_reverse_iterator rend() const noexcept { return const_reverse_iterator{cbegin()}; }
+ const_reverse_iterator crend() const noexcept { return const_reverse_iterator{cbegin()}; }
+
+ /** Count the number of bits in the container. */
+ size_type size() const noexcept { return m_deque.size() * BITS_PER_WORD - m_pad_begin - m_pad_end; }
+
+ /** Determine whether the container is empty. */
+ bool empty() const noexcept
+ {
+ return m_deque.size() == 0 || (m_deque.size() == 1 && (m_pad_begin + m_pad_end == BITS_PER_WORD));
+ }
+
+ /** Return the maximum size of the container. */
+ size_type max_size() const noexcept
+ {
+ if (m_deque.max_size() < std::numeric_limits<difference_type>::max() / BITS_PER_WORD) {
+ return m_deque.max_size() * BITS_PER_WORD;
+ } else {
+ return std::numeric_limits<difference_type>::max();
+ }
+ }
+
+ /** Set the container equal to the bits in [first,last). */
+ template<typename It>
+ void assign(It first, It last)
+ {
+ size_type count = std::distance(first, last);
+ assign(count, false);
+ auto it = begin();
+ while (first != last) {
+ *(it++) = *(first++);
+ }
+ }
+
+ /** Set the container equal to the bits in ilist. */
+ void assign(std::initializer_list<bool> ilist)
+ {
+ assign(ilist.size(), false);
+ auto it = begin();
+ auto init = ilist.begin();
+ while (init != ilist.end()) {
+ *(it++) = *(init++);
+ }
+ }
+
+ /** Set the container equal to the bits in ilist. */
+ bitdeque& operator=(std::initializer_list<bool> ilist)
+ {
+ assign(ilist);
+ return *this;
+ }
+
+ /** Construct a container containing the bits in [first,last). */
+ template<typename It>
+ bitdeque(It first, It last) { assign(first, last); }
+
+ /** Construct a container containing the bits in ilist. */
+ bitdeque(std::initializer_list<bool> ilist) { assign(ilist); }
+
+ // Access an element of the container, with bounds checking.
+ reference at(size_type position)
+ {
+ if (position >= size()) throw std::out_of_range("bitdeque::at() out of range");
+ return begin()[position];
+ }
+ const_reference at(size_type position) const
+ {
+ if (position >= size()) throw std::out_of_range("bitdeque::at() out of range");
+ return cbegin()[position];
+ }
+
+ // Access elements of the container without bounds checking.
+ reference operator[](size_type position) { return begin()[position]; }
+ const_reference operator[](size_type position) const { return cbegin()[position]; }
+ reference front() { return *begin(); }
+ const_reference front() const { return *cbegin(); }
+ reference back() { return end()[-1]; }
+ const_reference back() const { return cend()[-1]; }
+
+ /** Release unused memory. */
+ void shrink_to_fit()
+ {
+ m_deque.shrink_to_fit();
+ }
+
+ /** Empty the container. */
+ void clear() noexcept
+ {
+ m_deque.clear();
+ m_pad_begin = m_pad_end = 0;
+ }
+
+ // Append an element to the container.
+ void push_back(bool val)
+ {
+ extend_back(1);
+ back() = val;
+ }
+ reference emplace_back(bool val)
+ {
+ extend_back(1);
+ auto ref = back();
+ ref = val;
+ return ref;
+ }
+
+ // Prepend an element to the container.
+ void push_front(bool val)
+ {
+ extend_front(1);
+ front() = val;
+ }
+ reference emplace_front(bool val)
+ {
+ extend_front(1);
+ auto ref = front();
+ ref = val;
+ return ref;
+ }
+
+ // Remove the last element from the container.
+ void pop_back()
+ {
+ erase_back(1);
+ }
+
+ // Remove the first element from the container.
+ void pop_front()
+ {
+ erase_front(1);
+ }
+
+ /** Resize the container. */
+ void resize(size_type n)
+ {
+ if (n < size()) {
+ erase_back(size() - n);
+ } else {
+ extend_back(n - size());
+ }
+ }
+
+ // Swap two containers.
+ void swap(bitdeque& other) noexcept
+ {
+ std::swap(m_deque, other.m_deque);
+ std::swap(m_pad_begin, other.m_pad_begin);
+ std::swap(m_pad_end, other.m_pad_end);
+ }
+ friend void swap(bitdeque& b1, bitdeque& b2) noexcept { b1.swap(b2); }
+
+ // Erase elements from the container.
+ iterator erase(const_iterator first, const_iterator last)
+ {
+ size_type before = std::distance(cbegin(), first);
+ size_type dist = std::distance(first, last);
+ size_type after = std::distance(last, cend());
+ if (before < after) {
+ std::move_backward(begin(), begin() + before, end() - after);
+ erase_front(dist);
+ return begin() + before;
+ } else {
+ std::move(end() - after, end(), begin() + before);
+ erase_back(dist);
+ return end() - after;
+ }
+ }
+
+ iterator erase(iterator first, iterator last) { return erase(const_iterator{first}, const_iterator{last}); }
+ iterator erase(const_iterator pos) { return erase(pos, pos + 1); }
+ iterator erase(iterator pos) { return erase(const_iterator{pos}, const_iterator{pos} + 1); }
+
+ // Insert elements into the container.
+ iterator insert(const_iterator pos, bool val)
+ {
+ size_type before = pos - cbegin();
+ insert_zeroes(before, 1);
+ auto it = begin() + before;
+ *it = val;
+ return it;
+ }
+
+ iterator emplace(const_iterator pos, bool val) { return insert(pos, val); }
+
+ iterator insert(const_iterator pos, size_type count, bool val)
+ {
+ size_type before = pos - cbegin();
+ insert_zeroes(before, count);
+ auto it_begin = begin() + before;
+ auto it = it_begin;
+ auto it_end = it + count;
+ while (it != it_end) *(it++) = val;
+ return it_begin;
+ }
+
+ template<typename It>
+ iterator insert(const_iterator pos, It first, It last)
+ {
+ size_type before = pos - cbegin();
+ size_type count = std::distance(first, last);
+ insert_zeroes(before, count);
+ auto it_begin = begin() + before;
+ auto it = it_begin;
+ while (first != last) {
+ *(it++) = *(first++);
+ }
+ return it_begin;
+ }
+};
+
+#endif // BITCOIN_UTIL_BITDEQUE_H
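
As a quick illustration of the container above (illustrative only; the fuzz target earlier in this patch is the authoritative exercise of the interface), a small standalone usage sketch:

    #include <util/bitdeque.h>

    #include <cassert>
    #include <cstddef>

    int main()
    {
        bitdeque<> bits;                    // default BlobSize: 4096-byte (32768-bit) blobs
        bits.push_back(true);
        bits.push_front(false);
        bits.insert(bits.cbegin() + 1, /*count=*/3, /*val=*/true); // {0,1,1,1,1}
        assert(bits.size() == 5);
        assert(bits.front() == false && bits.back() == true);
        bits.pop_front();                   // {1,1,1,1}
        assert(bits[0] == true);
        std::size_t ones = 0;
        for (bool b : bits) ones += b;      // iterators are random access, so ranged-for works
        assert(ones == 4);
        return 0;
    }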
diff --git a/src/validation.cpp b/src/validation.cpp
index 0834dbe5a4..a2a339a5a1 100644
--- a/src/validation.cpp
+++ b/src/validation.cpp
@@ -2944,7 +2944,7 @@ static bool NotifyHeaderTip(CChainState& chainstate) LOCKS_EXCLUDED(cs_main) {
}
// Send block tip changed notifications without cs_main
if (fNotify) {
- uiInterface.NotifyHeaderTip(GetSynchronizationState(fInitialBlockDownload), pindexHeader);
+ uiInterface.NotifyHeaderTip(GetSynchronizationState(fInitialBlockDownload), pindexHeader->nHeight, pindexHeader->nTime, false);
}
return fNotify;
}
@@ -3432,6 +3432,22 @@ std::vector<unsigned char> ChainstateManager::GenerateCoinbaseCommitment(CBlock&
return commitment;
}
+bool HasValidProofOfWork(const std::vector<CBlockHeader>& headers, const Consensus::Params& consensusParams)
+{
+ return std::all_of(headers.cbegin(), headers.cend(),
+ [&](const auto& header) { return CheckProofOfWork(header.GetHash(), header.nBits, consensusParams);});
+}
+
+arith_uint256 CalculateHeadersWork(const std::vector<CBlockHeader>& headers)
+{
+ arith_uint256 total_work{0};
+ for (const CBlockHeader& header : headers) {
+ CBlockIndex dummy(header);
+ total_work += GetBlockProof(dummy);
+ }
+ return total_work;
+}
+
/** Context-dependent validity checks.
* By "context", we mean only the previous block headers, but not the UTXO
* set; UTXO-related validity checks are done in ConnectBlock().
@@ -3572,9 +3588,10 @@ static bool ContextualCheckBlock(const CBlock& block, BlockValidationState& stat
return true;
}
-bool ChainstateManager::AcceptBlockHeader(const CBlockHeader& block, BlockValidationState& state, CBlockIndex** ppindex)
+bool ChainstateManager::AcceptBlockHeader(const CBlockHeader& block, BlockValidationState& state, CBlockIndex** ppindex, bool min_pow_checked)
{
AssertLockHeld(cs_main);
+
// Check for duplicate
uint256 hash = block.GetHash();
BlockMap::iterator miSelf{m_blockman.m_block_index.find(hash)};
@@ -3652,6 +3669,10 @@ bool ChainstateManager::AcceptBlockHeader(const CBlockHeader& block, BlockValida
}
}
}
+ if (!min_pow_checked) {
+ LogPrint(BCLog::VALIDATION, "%s: not adding new block header %s, missing anti-dos proof-of-work validation\n", __func__, hash.ToString());
+ return state.Invalid(BlockValidationResult::BLOCK_HEADER_LOW_WORK, "too-little-chainwork");
+ }
CBlockIndex* pindex{m_blockman.AddToBlockIndex(block, m_best_header)};
if (ppindex)
@@ -3661,14 +3682,14 @@ bool ChainstateManager::AcceptBlockHeader(const CBlockHeader& block, BlockValida
}
// Exposed wrapper for AcceptBlockHeader
-bool ChainstateManager::ProcessNewBlockHeaders(const std::vector<CBlockHeader>& headers, BlockValidationState& state, const CBlockIndex** ppindex)
+bool ChainstateManager::ProcessNewBlockHeaders(const std::vector<CBlockHeader>& headers, bool min_pow_checked, BlockValidationState& state, const CBlockIndex** ppindex)
{
AssertLockNotHeld(cs_main);
{
LOCK(cs_main);
for (const CBlockHeader& header : headers) {
CBlockIndex *pindex = nullptr; // Use a temp pindex instead of ppindex to avoid a const_cast
- bool accepted{AcceptBlockHeader(header, state, &pindex)};
+ bool accepted{AcceptBlockHeader(header, state, &pindex, min_pow_checked)};
ActiveChainstate().CheckBlockIndex();
if (!accepted) {
@@ -3690,8 +3711,33 @@ bool ChainstateManager::ProcessNewBlockHeaders(const std::vector<CBlockHeader>&
return true;
}
+void ChainstateManager::ReportHeadersPresync(const arith_uint256& work, int64_t height, int64_t timestamp)
+{
+ AssertLockNotHeld(cs_main);
+ const auto& chainstate = ActiveChainstate();
+ {
+ LOCK(cs_main);
+ // Don't report headers presync progress if we already have a post-minchainwork header chain.
+ // This means we lose reporting for potentially legitimate, but unlikely, deep reorgs, but
+ // prevent attackers that spam low-work headers from filling our logs.
+ if (m_best_header->nChainWork >= UintToArith256(GetConsensus().nMinimumChainWork)) return;
+ // Rate limit headers presync updates to 4 per second, as these are not subject to DoS
+ // protection.
+ auto now = std::chrono::steady_clock::now();
+ if (now < m_last_presync_update + std::chrono::milliseconds{250}) return;
+ m_last_presync_update = now;
+ }
+ bool initial_download = chainstate.IsInitialBlockDownload();
+ uiInterface.NotifyHeaderTip(GetSynchronizationState(initial_download), height, timestamp, /*presync=*/true);
+ if (initial_download) {
+ const int64_t blocks_left{(GetTime() - timestamp) / GetConsensus().nPowTargetSpacing};
+ const double progress{100.0 * height / (height + blocks_left)};
+ LogPrintf("Pre-synchronizing blockheaders, height: %d (~%.2f%%)\n", height, progress);
+ }
+}
+
/** Store block on disk. If dbp is non-nullptr, the file is known to already reside on disk */
-bool CChainState::AcceptBlock(const std::shared_ptr<const CBlock>& pblock, BlockValidationState& state, CBlockIndex** ppindex, bool fRequested, const FlatFilePos* dbp, bool* fNewBlock)
+bool CChainState::AcceptBlock(const std::shared_ptr<const CBlock>& pblock, BlockValidationState& state, CBlockIndex** ppindex, bool fRequested, const FlatFilePos* dbp, bool* fNewBlock, bool min_pow_checked)
{
const CBlock& block = *pblock;
@@ -3701,7 +3747,7 @@ bool CChainState::AcceptBlock(const std::shared_ptr<const CBlock>& pblock, Block
CBlockIndex *pindexDummy = nullptr;
CBlockIndex *&pindex = ppindex ? *ppindex : pindexDummy;
- bool accepted_header{m_chainman.AcceptBlockHeader(block, state, &pindex)};
+ bool accepted_header{m_chainman.AcceptBlockHeader(block, state, &pindex, min_pow_checked)};
CheckBlockIndex();
if (!accepted_header)
@@ -3774,7 +3820,7 @@ bool CChainState::AcceptBlock(const std::shared_ptr<const CBlock>& pblock, Block
return true;
}
-bool ChainstateManager::ProcessNewBlock(const std::shared_ptr<const CBlock>& block, bool force_processing, bool* new_block)
+bool ChainstateManager::ProcessNewBlock(const std::shared_ptr<const CBlock>& block, bool force_processing, bool min_pow_checked, bool* new_block)
{
AssertLockNotHeld(cs_main);
@@ -3795,7 +3841,7 @@ bool ChainstateManager::ProcessNewBlock(const std::shared_ptr<const CBlock>& blo
bool ret = CheckBlock(*block, state, GetConsensus());
if (ret) {
// Store to disk
- ret = ActiveChainstate().AcceptBlock(block, state, &pindex, force_processing, nullptr, new_block);
+ ret = ActiveChainstate().AcceptBlock(block, state, &pindex, force_processing, nullptr, new_block, min_pow_checked);
}
if (!ret) {
GetMainSignals().BlockChecked(*block, state);
@@ -4332,7 +4378,7 @@ void CChainState::LoadExternalBlockFile(
const CBlockIndex* pindex = m_blockman.LookupBlockIndex(hash);
if (!pindex || (pindex->nStatus & BLOCK_HAVE_DATA) == 0) {
BlockValidationState state;
- if (AcceptBlock(pblock, state, nullptr, true, dbp, nullptr)) {
+ if (AcceptBlock(pblock, state, nullptr, true, dbp, nullptr, true)) {
nLoaded++;
}
if (state.IsError()) {
@@ -4370,7 +4416,7 @@ void CChainState::LoadExternalBlockFile(
head.ToString());
LOCK(cs_main);
BlockValidationState dummy;
- if (AcceptBlock(pblockrecursive, dummy, nullptr, true, &it->second, nullptr)) {
+ if (AcceptBlock(pblockrecursive, dummy, nullptr, true, &it->second, nullptr, true)) {
nLoaded++;
queue.push_back(pblockrecursive->GetHash());
}
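
As a rough worked example of the progress estimate in ReportHeadersPresync() above (numbers chosen purely for illustration): with nPowTargetSpacing = 600 seconds there are about 144 blocks per day, so a best pre-synced header timestamped roughly four years in the past implies on the order of 4 * 365 * 144 ≈ 210,000 blocks still to come. At a pre-synced height of 550,000 the logged figure would therefore be about 100 * 550,000 / (550,000 + 210,000) ≈ 72%. The estimate assumes one block per target spacing, so it is only an approximation of real progress.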
diff --git a/src/validation.h b/src/validation.h
index 19241caeb9..7f5039aaea 100644
--- a/src/validation.h
+++ b/src/validation.h
@@ -340,6 +340,12 @@ bool TestBlockValidity(BlockValidationState& state,
bool fCheckPOW = true,
bool fCheckMerkleRoot = true) EXCLUSIVE_LOCKS_REQUIRED(cs_main);
+/** Check whether the proof of work on each block header matches the value in nBits */
+bool HasValidProofOfWork(const std::vector<CBlockHeader>& headers, const Consensus::Params& consensusParams);
+
+/** Return the sum of the work on a given set of headers */
+arith_uint256 CalculateHeadersWork(const std::vector<CBlockHeader>& headers);
+
/** RAII wrapper for VerifyDB: Verify consistency of the block and coin databases */
class CVerifyDB {
public:
@@ -650,7 +656,7 @@ public:
EXCLUSIVE_LOCKS_REQUIRED(!m_chainstate_mutex)
LOCKS_EXCLUDED(::cs_main);
- bool AcceptBlock(const std::shared_ptr<const CBlock>& pblock, BlockValidationState& state, CBlockIndex** ppindex, bool fRequested, const FlatFilePos* dbp, bool* fNewBlock) EXCLUSIVE_LOCKS_REQUIRED(cs_main);
+ bool AcceptBlock(const std::shared_ptr<const CBlock>& pblock, BlockValidationState& state, CBlockIndex** ppindex, bool fRequested, const FlatFilePos* dbp, bool* fNewBlock, bool min_pow_checked) EXCLUSIVE_LOCKS_REQUIRED(cs_main);
// Block (dis)connection on a given view:
DisconnectResult DisconnectBlock(const CBlock& block, const CBlockIndex* pindex, CCoinsViewCache& view)
@@ -847,13 +853,20 @@ private:
/**
* If a block header hasn't already been seen, call CheckBlockHeader on it, ensure
* that it doesn't descend from an invalid block, and then add it to m_block_index.
+ * Caller must set min_pow_checked=true in order to add a new header to the
+ * block index (permanent memory storage), indicating that the header is
+ * known to be part of a sufficiently high-work chain (anti-dos check).
*/
bool AcceptBlockHeader(
const CBlockHeader& block,
BlockValidationState& state,
- CBlockIndex** ppindex) EXCLUSIVE_LOCKS_REQUIRED(cs_main);
+ CBlockIndex** ppindex,
+ bool min_pow_checked) EXCLUSIVE_LOCKS_REQUIRED(cs_main);
friend CChainState;
+ /** Most recent headers presync progress update, for rate-limiting. */
+ std::chrono::time_point<std::chrono::steady_clock> m_last_presync_update GUARDED_BY(::cs_main) {};
+
public:
using Options = kernel::ChainstateManagerOpts;
@@ -989,10 +1002,15 @@ public:
*
* @param[in] block The block we want to process.
* @param[in] force_processing Process this block even if unrequested; used for non-network block sources.
+ * @param[in] min_pow_checked True if proof-of-work anti-DoS checks have
+ * been done by caller for headers chain
+ * (note: only affects headers acceptance; if
+ * block header is already present in block
+ * index then this parameter has no effect)
* @param[out] new_block A boolean which is set to indicate if the block was first received via this call
* @returns If the block was processed, independently of block validity
*/
- bool ProcessNewBlock(const std::shared_ptr<const CBlock>& block, bool force_processing, bool* new_block) LOCKS_EXCLUDED(cs_main);
+ bool ProcessNewBlock(const std::shared_ptr<const CBlock>& block, bool force_processing, bool min_pow_checked, bool* new_block) LOCKS_EXCLUDED(cs_main);
/**
* Process incoming block headers.
@@ -1001,10 +1019,11 @@ public:
* validationinterface callback.
*
* @param[in] block The block headers themselves
+ * @param[in] min_pow_checked True if proof-of-work anti-DoS checks have been done by caller for headers chain
* @param[out] state This may be set to an Error state if any error occurred processing them
* @param[out] ppindex If set, the pointer will be set to point to the last new block index object for the given headers
*/
- bool ProcessNewBlockHeaders(const std::vector<CBlockHeader>& block, BlockValidationState& state, const CBlockIndex** ppindex = nullptr) LOCKS_EXCLUDED(cs_main);
+ bool ProcessNewBlockHeaders(const std::vector<CBlockHeader>& block, bool min_pow_checked, BlockValidationState& state, const CBlockIndex** ppindex = nullptr) LOCKS_EXCLUDED(cs_main);
/**
* Try to add a transaction to the memory pool.
@@ -1028,6 +1047,12 @@ public:
/** Produce the necessary coinbase commitment for a block (modifies the hash, don't call for mined blocks). */
std::vector<unsigned char> GenerateCoinbaseCommitment(CBlock& block, const CBlockIndex* pindexPrev) const;
+ /** This is used by net_processing to report pre-synchronization progress of headers, as
+ * headers are not yet fed to validation during that time, but validation is (for now)
+ * responsible for logging and signalling through NotifyHeaderTip, so it needs this
+ * information. */
+ void ReportHeadersPresync(const arith_uint256& work, int64_t height, int64_t timestamp);
+
~ChainstateManager();
};
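
Putting the new pieces together, the following is a minimal sketch of the contract that min_pow_checked documents above: a caller validates each header's proof of work, confirms the chain carries enough total work, and only then asks validation to store the headers permanently. MaybeAcceptHeaders() is a hypothetical illustration, and the comparison against nMinimumChainWork is a simplification; the real net_processing logic also compares against work near the current tip and falls back to the HeadersSyncState machinery when the threshold is not yet met.

    // Sketch only, not the actual net_processing logic.
    bool MaybeAcceptHeaders(ChainstateManager& chainman,
                            const CBlockIndex* chain_start,
                            const std::vector<CBlockHeader>& headers,
                            const Consensus::Params& consensus_params)
    {
        // Cheap per-header check: each header's hash must satisfy its own nBits.
        if (!HasValidProofOfWork(headers, consensus_params)) return false;

        // Anti-DoS check: only commit headers to permanent storage once the chain
        // they extend carries sufficient total work (simplified here to the
        // minimum-chain-work threshold).
        const arith_uint256 total_work{chain_start->nChainWork + CalculateHeadersWork(headers)};
        if (total_work < UintToArith256(consensus_params.nMinimumChainWork)) {
            // Not enough work yet: this is where the HeadersSyncState presync /
            // redownload machinery would take over instead of storing anything.
            return false;
        }

        BlockValidationState state;
        return chainman.ProcessNewBlockHeaders(headers, /*min_pow_checked=*/true, state);
    }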