-rw-r--r--   configure.ac                          |   5
-rw-r--r--   depends/README.md                     |   1
-rw-r--r--   depends/packages/qt.mk                |   1
-rw-r--r--   doc/README.md                         |   1
-rw-r--r--   doc/policy/README.md                  |  10
-rw-r--r--   doc/policy/packages.md                |  59
-rw-r--r--   src/bench/peer_eviction.cpp           |  12
-rw-r--r--   src/chain.h                           | 130
-rw-r--r--   src/net.cpp                           |  34
-rw-r--r--   src/net.h                             |  20
-rw-r--r--   src/net_processing.cpp                |  32
-rw-r--r--   src/policy/packages.cpp               |  17
-rw-r--r--   src/policy/packages.h                 |   6
-rw-r--r--   src/qt/modaloverlay.cpp               |   2
-rw-r--r--   src/qt/rpcconsole.cpp                 |   6
-rw-r--r--   src/rpc/net.cpp                       |   6
-rw-r--r--   src/rpc/rawtransaction.cpp            |   2
-rw-r--r--   src/test/denialofservice_tests.cpp    |   9
-rw-r--r--   src/test/fuzz/node_eviction.cpp       |   6
-rw-r--r--   src/test/net_peer_eviction_tests.cpp  |  64
-rw-r--r--   src/test/txpackage_tests.cpp          | 213
-rw-r--r--   src/test/util/net.cpp                 |   6
-rw-r--r--   src/validation.cpp                    | 242
-rw-r--r--   src/validation.h                      |  38
-rwxr-xr-x   test/functional/p2p_timeouts.py       |   2
25 files changed, 745 insertions(+), 179 deletions(-)
diff --git a/configure.ac b/configure.ac
index da71ee93c5..559994818e 100644
--- a/configure.ac
+++ b/configure.ac
@@ -776,10 +776,7 @@ case $host in
*armv7a*)
ANDROID_ARCH=armeabi-v7a
;;
- *i686*)
- ANDROID_ARCH=i686
- ;;
- *) AC_MSG_ERROR([Could not determine Android arch]) ;;
+ *) AC_MSG_ERROR([Could not determine Android arch, or it is unsupported]) ;;
esac
;;
*linux*)
diff --git a/depends/README.md b/depends/README.md
index d7fba754c4..2a39ed3907 100644
--- a/depends/README.md
+++ b/depends/README.md
@@ -38,7 +38,6 @@ Common `host-platform-triplet`s for cross compilation are:
- `s390x-linux-gnu` for Linux S390X
- `armv7a-linux-android` for Android ARM 32 bit
- `aarch64-linux-android` for Android ARM 64 bit
-- `i686-linux-android` for Android x86 32 bit
- `x86_64-linux-android` for Android x86 64 bit
The paths are automatically configured and no other options are needed unless targeting [Android](../doc/build-android.md).
diff --git a/depends/packages/qt.mk b/depends/packages/qt.mk
index 6b2d44dc64..83479b4b06 100644
--- a/depends/packages/qt.mk
+++ b/depends/packages/qt.mk
@@ -181,7 +181,6 @@ $(package)_config_opts_android += -no-feature-vulkan
$(package)_config_opts_aarch64_android += -android-arch arm64-v8a
$(package)_config_opts_armv7a_android += -android-arch armeabi-v7a
$(package)_config_opts_x86_64_android += -android-arch x86_64
-$(package)_config_opts_i686_android += -android-arch i686
endef
define $(package)_fetch_cmds
diff --git a/doc/README.md b/doc/README.md
index f737fd8a86..2800937d30 100644
--- a/doc/README.md
+++ b/doc/README.md
@@ -83,6 +83,7 @@ The Bitcoin repo's [root README](/README.md) contains relevant information on th
- [Reduce Memory](reduce-memory.md)
- [Reduce Traffic](reduce-traffic.md)
- [Tor Support](tor.md)
+- [Transaction Relay Policy](policy/README.md)
- [ZMQ](zmq.md)
License
diff --git a/doc/policy/README.md b/doc/policy/README.md
new file mode 100644
index 0000000000..9c83f4b56e
--- /dev/null
+++ b/doc/policy/README.md
@@ -0,0 +1,10 @@
+# Transaction Relay Policy
+
+Policy is a set of validation rules, in addition to consensus, enforced for unconfirmed
+transactions.
+
+This documentation is not an exhaustive list of all policy rules.
+
+- [Packages](packages.md)
+
+
diff --git a/doc/policy/packages.md b/doc/policy/packages.md
new file mode 100644
index 0000000000..07698f2af2
--- /dev/null
+++ b/doc/policy/packages.md
@@ -0,0 +1,59 @@
+# Package Mempool Accept
+
+## Definitions
+
+A **package** is an ordered list of transactions, representable by a connected Directed Acyclic
+Graph (a directed edge exists between a transaction and each transaction whose output it spends).
+
+For every transaction `t` in a **topologically sorted** package, if any of its parents are present
+in the package, they appear somewhere in the list before `t`.
+
+A **child-with-unconfirmed-parents** package is a topologically sorted package that consists of
+exactly one child and all of its unconfirmed parents (no other transactions may be present).
+The last transaction in the package is the child, and its package can be canonically defined based
+on the current state: each of its inputs must be available in the UTXO set as of the current chain
+tip or some preceding transaction in the package.
+
+## Package Mempool Acceptance Rules
+
+The following rules are enforced for all packages:
+
+* Packages cannot exceed `MAX_PACKAGE_COUNT=25` count and `MAX_PACKAGE_SIZE=101KvB` total size
+ (#20833)
+
+ - *Rationale*: This is already enforced as mempool ancestor/descendant limits. Since
+ transactions in a package are all expected to be related, exceeding this limit means the package
+ can either be split up or it would not pass individual mempool policy anyway.
+
+ - Note that, if these mempool limits change, package limits should be reconsidered. Users may
+ also configure their mempool limits differently.
+
+* Packages must be topologically sorted. (#20833)
+
+* Packages cannot have conflicting transactions, i.e. no two transactions in a package can spend
+ the same inputs. Packages cannot have duplicate transactions. (#20833)
+
+* No transaction in a package can conflict with a mempool transaction. BIP125 Replace By Fee is
+ currently disabled for packages. (#20833)
+
+ - Package RBF may be enabled in the future.
+
+* When packages are evaluated against ancestor/descendant limits, the union of all transactions'
+ descendants and ancestors is considered. (#21800)
+
+ - *Rationale*: This is essentially a "worst case" heuristic intended for packages that are
+ heavily connected, i.e. some transaction in the package is the ancestor or descendant of all
+ the other transactions.
+
+The following rules are only enforced for packages to be submitted to the mempool (not enforced for
+test accepts):
+
+* Packages must be child-with-unconfirmed-parents packages. This also means packages must contain at
+ least 2 transactions. (#22674)
+
+ - *Rationale*: This allows for fee-bumping by CPFP. Allowing multiple parents makes it possible
+ to fee-bump a batch of transactions. Restricting packages to a defined topology is easier to
+ reason about and simplifies the validation logic greatly.
+
+ - Warning: Batched fee-bumping may be unsafe for some use cases. Users and application developers
+ should take caution if utilizing multi-parent packages.
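
For illustration, the topology rules defined in doc/policy/packages.md above can be checked in a few lines of standalone C++. The sketch below is not part of the patch: the `Tx` struct and both helpers are hypothetical simplifications, while the real checks are `CheckPackage()` and `IsChildWithParents()` in src/policy/packages.cpp further down in this diff.

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <unordered_set>
#include <vector>

// Hypothetical, trimmed-down stand-in for CTransactionRef: a txid plus the
// txids referenced by this transaction's inputs.
struct Tx {
    std::string txid;
    std::vector<std::string> spends;
};

// Topologically sorted: every in-package parent of a transaction appears
// earlier in the list (mirrors the definition above).
bool IsTopoSorted(const std::vector<Tx>& package)
{
    std::unordered_set<std::string> seen;
    for (const Tx& tx : package) {
        for (const std::string& parent : tx.spends) {
            // Only parents that are themselves in the package matter; they
            // must already have been seen, i.e. appear earlier in the list.
            const bool in_package = std::any_of(package.begin(), package.end(),
                [&](const Tx& other) { return other.txid == parent; });
            if (in_package && seen.count(parent) == 0) return false;
        }
        seen.insert(tx.txid);
    }
    return true;
}

// Child-with-parents: at least 2 transactions, and every transaction except
// the last is spent by the last one (compare IsChildWithParents() below).
bool IsChildWithParentsSketch(const std::vector<Tx>& package)
{
    if (package.size() < 2) return false;
    const Tx& child = package.back();
    const std::unordered_set<std::string> inputs(child.spends.begin(), child.spends.end());
    return std::all_of(package.begin(), package.end() - 1,
        [&](const Tx& tx) { return inputs.count(tx.txid) > 0; });
}

int main()
{
    const Tx parent_a{"a", {}};
    const Tx parent_b{"b", {}};
    const Tx child{"c", {"a", "b"}};
    assert(IsTopoSorted({parent_a, parent_b, child}));
    assert(!IsTopoSorted({child, parent_a, parent_b}));          // child listed before its parents
    assert(IsChildWithParentsSketch({parent_a, parent_b, child}));
    assert(!IsChildWithParentsSketch({parent_a, child, parent_b})); // last tx is not the child
    return 0;
}
```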
diff --git a/src/bench/peer_eviction.cpp b/src/bench/peer_eviction.cpp
index 8429f18613..f05f5e8f64 100644
--- a/src/bench/peer_eviction.cpp
+++ b/src/bench/peer_eviction.cpp
@@ -42,7 +42,7 @@ static void EvictionProtection0Networks250Candidates(benchmark::Bench& bench)
bench,
250 /* num_candidates */,
[](NodeEvictionCandidate& c) {
- c.nTimeConnected = c.id;
+ c.m_connected = std::chrono::seconds{c.id};
c.m_network = NET_IPV4;
});
}
@@ -53,7 +53,7 @@ static void EvictionProtection1Networks250Candidates(benchmark::Bench& bench)
bench,
250 /* num_candidates */,
[](NodeEvictionCandidate& c) {
- c.nTimeConnected = c.id;
+ c.m_connected = std::chrono::seconds{c.id};
c.m_is_local = false;
if (c.id >= 130 && c.id < 240) { // 110 Tor
c.m_network = NET_ONION;
@@ -69,7 +69,7 @@ static void EvictionProtection2Networks250Candidates(benchmark::Bench& bench)
bench,
250 /* num_candidates */,
[](NodeEvictionCandidate& c) {
- c.nTimeConnected = c.id;
+ c.m_connected = std::chrono::seconds{c.id};
c.m_is_local = false;
if (c.id >= 90 && c.id < 160) { // 70 Tor
c.m_network = NET_ONION;
@@ -87,7 +87,7 @@ static void EvictionProtection3Networks050Candidates(benchmark::Bench& bench)
bench,
50 /* num_candidates */,
[](NodeEvictionCandidate& c) {
- c.nTimeConnected = c.id;
+ c.m_connected = std::chrono::seconds{c.id};
c.m_is_local = (c.id == 28 || c.id == 47); // 2 localhost
if (c.id >= 30 && c.id < 47) { // 17 I2P
c.m_network = NET_I2P;
@@ -105,7 +105,7 @@ static void EvictionProtection3Networks100Candidates(benchmark::Bench& bench)
bench,
100 /* num_candidates */,
[](NodeEvictionCandidate& c) {
- c.nTimeConnected = c.id;
+ c.m_connected = std::chrono::seconds{c.id};
c.m_is_local = (c.id >= 55 && c.id < 60); // 5 localhost
if (c.id >= 70 && c.id < 80) { // 10 I2P
c.m_network = NET_I2P;
@@ -123,7 +123,7 @@ static void EvictionProtection3Networks250Candidates(benchmark::Bench& bench)
bench,
250 /* num_candidates */,
[](NodeEvictionCandidate& c) {
- c.nTimeConnected = c.id;
+ c.m_connected = std::chrono::seconds{c.id};
c.m_is_local = (c.id >= 140 && c.id < 160); // 20 localhost
if (c.id >= 170 && c.id < 180) { // 10 I2P
c.m_network = NET_I2P;
diff --git a/src/chain.h b/src/chain.h
index 15ca8f8750..8eafc5d588 100644
--- a/src/chain.h
+++ b/src/chain.h
@@ -59,37 +59,40 @@ public:
READWRITE(VARINT(obj.nTimeLast));
}
- void SetNull() {
- nBlocks = 0;
- nSize = 0;
- nUndoSize = 0;
- nHeightFirst = 0;
- nHeightLast = 0;
- nTimeFirst = 0;
- nTimeLast = 0;
- }
-
- CBlockFileInfo() {
- SetNull();
- }
-
- std::string ToString() const;
-
- /** update statistics (does not update nSize) */
- void AddBlock(unsigned int nHeightIn, uint64_t nTimeIn) {
- if (nBlocks==0 || nHeightFirst > nHeightIn)
- nHeightFirst = nHeightIn;
- if (nBlocks==0 || nTimeFirst > nTimeIn)
- nTimeFirst = nTimeIn;
- nBlocks++;
- if (nHeightIn > nHeightLast)
- nHeightLast = nHeightIn;
- if (nTimeIn > nTimeLast)
- nTimeLast = nTimeIn;
- }
+ void SetNull()
+ {
+ nBlocks = 0;
+ nSize = 0;
+ nUndoSize = 0;
+ nHeightFirst = 0;
+ nHeightLast = 0;
+ nTimeFirst = 0;
+ nTimeLast = 0;
+ }
+
+ CBlockFileInfo()
+ {
+ SetNull();
+ }
+
+ std::string ToString() const;
+
+ /** update statistics (does not update nSize) */
+ void AddBlock(unsigned int nHeightIn, uint64_t nTimeIn)
+ {
+ if (nBlocks == 0 || nHeightFirst > nHeightIn)
+ nHeightFirst = nHeightIn;
+ if (nBlocks == 0 || nTimeFirst > nTimeIn)
+ nTimeFirst = nTimeIn;
+ nBlocks++;
+ if (nHeightIn > nHeightLast)
+ nHeightLast = nHeightIn;
+ if (nTimeIn > nTimeLast)
+ nTimeLast = nTimeIn;
+ }
};
-enum BlockStatus: uint32_t {
+enum BlockStatus : uint32_t {
//! Unused.
BLOCK_VALID_UNKNOWN = 0,
@@ -220,20 +223,22 @@ public:
{
}
- FlatFilePos GetBlockPos() const {
+ FlatFilePos GetBlockPos() const
+ {
FlatFilePos ret;
if (nStatus & BLOCK_HAVE_DATA) {
ret.nFile = nFile;
- ret.nPos = nDataPos;
+ ret.nPos = nDataPos;
}
return ret;
}
- FlatFilePos GetUndoPos() const {
+ FlatFilePos GetUndoPos() const
+ {
FlatFilePos ret;
if (nStatus & BLOCK_HAVE_UNDO) {
ret.nFile = nFile;
- ret.nPos = nUndoPos;
+ ret.nPos = nUndoPos;
}
return ret;
}
@@ -241,13 +246,13 @@ public:
CBlockHeader GetBlockHeader() const
{
CBlockHeader block;
- block.nVersion = nVersion;
+ block.nVersion = nVersion;
if (pprev)
block.hashPrevBlock = pprev->GetBlockHash();
block.hashMerkleRoot = hashMerkleRoot;
- block.nTime = nTime;
- block.nBits = nBits;
- block.nNonce = nNonce;
+ block.nTime = nTime;
+ block.nBits = nBits;
+ block.nNonce = nNonce;
return block;
}
@@ -288,7 +293,7 @@ public:
*(--pbegin) = pindex->GetBlockTime();
std::sort(pbegin, pend);
- return pbegin[(pend - pbegin)/2];
+ return pbegin[(pend - pbegin) / 2];
}
std::string ToString() const
@@ -353,11 +358,13 @@ class CDiskBlockIndex : public CBlockIndex
public:
uint256 hashPrev;
- CDiskBlockIndex() {
+ CDiskBlockIndex()
+ {
hashPrev = uint256();
}
- explicit CDiskBlockIndex(const CBlockIndex* pindex) : CBlockIndex(*pindex) {
+ explicit CDiskBlockIndex(const CBlockIndex* pindex) : CBlockIndex(*pindex)
+ {
hashPrev = (pprev ? pprev->GetBlockHash() : uint256());
}
@@ -385,12 +392,12 @@ public:
uint256 GetBlockHash() const
{
CBlockHeader block;
- block.nVersion = nVersion;
- block.hashPrevBlock = hashPrev;
- block.hashMerkleRoot = hashMerkleRoot;
- block.nTime = nTime;
- block.nBits = nBits;
- block.nNonce = nNonce;
+ block.nVersion = nVersion;
+ block.hashPrevBlock = hashPrev;
+ block.hashMerkleRoot = hashMerkleRoot;
+ block.nTime = nTime;
+ block.nBits = nBits;
+ block.nNonce = nNonce;
return block.GetHash();
}
@@ -407,35 +414,45 @@ public:
};
/** An in-memory indexed chain of blocks. */
-class CChain {
+class CChain
+{
private:
std::vector<CBlockIndex*> vChain;
public:
+ CChain() = default;
+ CChain(const CChain&) = delete;
+ CChain& operator=(const CChain&) = delete;
+
/** Returns the index entry for the genesis block of this chain, or nullptr if none. */
- CBlockIndex *Genesis() const {
+ CBlockIndex* Genesis() const
+ {
return vChain.size() > 0 ? vChain[0] : nullptr;
}
/** Returns the index entry for the tip of this chain, or nullptr if none. */
- CBlockIndex *Tip() const {
+ CBlockIndex* Tip() const
+ {
return vChain.size() > 0 ? vChain[vChain.size() - 1] : nullptr;
}
/** Returns the index entry at a particular height in this chain, or nullptr if no such height exists. */
- CBlockIndex *operator[](int nHeight) const {
+ CBlockIndex* operator[](int nHeight) const
+ {
if (nHeight < 0 || nHeight >= (int)vChain.size())
return nullptr;
return vChain[nHeight];
}
/** Efficiently check whether a block is present in this chain. */
- bool Contains(const CBlockIndex *pindex) const {
+ bool Contains(const CBlockIndex* pindex) const
+ {
return (*this)[pindex->nHeight] == pindex;
}
/** Find the successor of a block in this chain, or nullptr if the given index is not found or is the tip. */
- CBlockIndex *Next(const CBlockIndex *pindex) const {
+ CBlockIndex* Next(const CBlockIndex* pindex) const
+ {
if (Contains(pindex))
return (*this)[pindex->nHeight + 1];
else
@@ -443,18 +460,19 @@ public:
}
/** Return the maximal height in the chain. Is equal to chain.Tip() ? chain.Tip()->nHeight : -1. */
- int Height() const {
+ int Height() const
+ {
return vChain.size() - 1;
}
/** Set/initialize a chain with a given tip. */
- void SetTip(CBlockIndex *pindex);
+ void SetTip(CBlockIndex* pindex);
/** Return a CBlockLocator that refers to a block in this chain (by default the tip). */
- CBlockLocator GetLocator(const CBlockIndex *pindex = nullptr) const;
+ CBlockLocator GetLocator(const CBlockIndex* pindex = nullptr) const;
/** Find the last common block between this chain and a block index entry. */
- const CBlockIndex *FindFork(const CBlockIndex *pindex) const;
+ const CBlockIndex* FindFork(const CBlockIndex* pindex) const;
/** Find the earliest block with timestamp equal or greater than the given time and height equal or greater than the given height. */
CBlockIndex* FindEarliestAtLeast(int64_t nTime, int height) const;
diff --git a/src/net.cpp b/src/net.cpp
index fcebdf35b3..0f49b8ad5a 100644
--- a/src/net.cpp
+++ b/src/net.cpp
@@ -588,9 +588,9 @@ void CNode::CopyStats(CNodeStats& stats)
}
X(m_last_send);
X(m_last_recv);
- X(nLastTXTime);
- X(nLastBlockTime);
- X(nTimeConnected);
+ X(m_last_tx_time);
+ X(m_last_block_time);
+ X(m_connected);
X(nTimeOffset);
X(m_addr_name);
X(nVersion);
@@ -847,7 +847,7 @@ static bool ReverseCompareNodeMinPingTime(const NodeEvictionCandidate &a, const
static bool ReverseCompareNodeTimeConnected(const NodeEvictionCandidate &a, const NodeEvictionCandidate &b)
{
- return a.nTimeConnected > b.nTimeConnected;
+ return a.m_connected > b.m_connected;
}
static bool CompareNetGroupKeyed(const NodeEvictionCandidate &a, const NodeEvictionCandidate &b) {
@@ -857,27 +857,27 @@ static bool CompareNetGroupKeyed(const NodeEvictionCandidate &a, const NodeEvict
static bool CompareNodeBlockTime(const NodeEvictionCandidate &a, const NodeEvictionCandidate &b)
{
// There is a fall-through here because it is common for a node to have many peers which have not yet relayed a block.
- if (a.nLastBlockTime != b.nLastBlockTime) return a.nLastBlockTime < b.nLastBlockTime;
+ if (a.m_last_block_time != b.m_last_block_time) return a.m_last_block_time < b.m_last_block_time;
if (a.fRelevantServices != b.fRelevantServices) return b.fRelevantServices;
- return a.nTimeConnected > b.nTimeConnected;
+ return a.m_connected > b.m_connected;
}
static bool CompareNodeTXTime(const NodeEvictionCandidate &a, const NodeEvictionCandidate &b)
{
// There is a fall-through here because it is common for a node to have more than a few peers that have not yet relayed txn.
- if (a.nLastTXTime != b.nLastTXTime) return a.nLastTXTime < b.nLastTXTime;
+ if (a.m_last_tx_time != b.m_last_tx_time) return a.m_last_tx_time < b.m_last_tx_time;
if (a.fRelayTxes != b.fRelayTxes) return b.fRelayTxes;
if (a.fBloomFilter != b.fBloomFilter) return a.fBloomFilter;
- return a.nTimeConnected > b.nTimeConnected;
+ return a.m_connected > b.m_connected;
}
// Pick out the potential block-relay only peers, and sort them by last block time.
static bool CompareNodeBlockRelayOnlyTime(const NodeEvictionCandidate &a, const NodeEvictionCandidate &b)
{
if (a.fRelayTxes != b.fRelayTxes) return a.fRelayTxes;
- if (a.nLastBlockTime != b.nLastBlockTime) return a.nLastBlockTime < b.nLastBlockTime;
+ if (a.m_last_block_time != b.m_last_block_time) return a.m_last_block_time < b.m_last_block_time;
if (a.fRelevantServices != b.fRelevantServices) return b.fRelevantServices;
- return a.nTimeConnected > b.nTimeConnected;
+ return a.m_connected > b.m_connected;
}
/**
@@ -896,7 +896,7 @@ struct CompareNodeNetworkTime {
{
if (m_is_local && a.m_is_local != b.m_is_local) return b.m_is_local;
if ((a.m_network == m_network) != (b.m_network == m_network)) return b.m_network == m_network;
- return a.nTimeConnected > b.nTimeConnected;
+ return a.m_connected > b.m_connected;
};
};
@@ -1023,12 +1023,12 @@ void ProtectEvictionCandidatesByRatio(std::vector<NodeEvictionCandidate>& evicti
// (vEvictionCandidates is already sorted by reverse connect time)
uint64_t naMostConnections;
unsigned int nMostConnections = 0;
- int64_t nMostConnectionsTime = 0;
+ std::chrono::seconds nMostConnectionsTime{0};
std::map<uint64_t, std::vector<NodeEvictionCandidate> > mapNetGroupNodes;
for (const NodeEvictionCandidate &node : vEvictionCandidates) {
std::vector<NodeEvictionCandidate> &group = mapNetGroupNodes[node.nKeyedNetGroup];
group.push_back(node);
- const int64_t grouptime = group[0].nTimeConnected;
+ const auto grouptime{group[0].m_connected};
if (group.size() > nMostConnections || (group.size() == nMostConnections && grouptime > nMostConnectionsTime)) {
nMostConnections = group.size();
@@ -1072,8 +1072,8 @@ bool CConnman::AttemptToEvictConnection()
peer_relay_txes = node->m_tx_relay->fRelayTxes;
peer_filter_not_null = node->m_tx_relay->pfilter != nullptr;
}
- NodeEvictionCandidate candidate = {node->GetId(), node->nTimeConnected, node->m_min_ping_time,
- node->nLastBlockTime, node->nLastTXTime,
+ NodeEvictionCandidate candidate = {node->GetId(), node->m_connected, node->m_min_ping_time,
+ node->m_last_block_time, node->m_last_tx_time,
HasAllDesirableServiceFlags(node->nServices),
peer_relay_txes, peer_filter_not_null, node->nKeyedNetGroup,
node->m_prefer_evict, node->addr.IsLocal(),
@@ -1322,7 +1322,7 @@ void CConnman::NotifyNumConnectionsChanged()
bool CConnman::ShouldRunInactivityChecks(const CNode& node, std::chrono::seconds now) const
{
- return std::chrono::seconds{node.nTimeConnected} + m_peer_connect_timeout < now;
+ return node.m_connected + m_peer_connect_timeout < now;
}
bool CConnman::InactivityCheck(const CNode& node) const
@@ -2977,7 +2977,7 @@ ServiceFlags CConnman::GetLocalServices() const
unsigned int CConnman::GetReceiveFloodSize() const { return nReceiveFloodSize; }
CNode::CNode(NodeId idIn, ServiceFlags nLocalServicesIn, SOCKET hSocketIn, const CAddress& addrIn, uint64_t nKeyedNetGroupIn, uint64_t nLocalHostNonceIn, const CAddress& addrBindIn, const std::string& addrNameIn, ConnectionType conn_type_in, bool inbound_onion)
- : nTimeConnected(GetTimeSeconds()),
+ : m_connected{GetTime<std::chrono::seconds>()},
addr(addrIn),
addrBind(addrBindIn),
m_addr_name{addrNameIn.empty() ? addr.ToStringIPPort() : addrNameIn},
diff --git a/src/net.h b/src/net.h
index a8f8232f8f..9623a630e7 100644
--- a/src/net.h
+++ b/src/net.h
@@ -243,9 +243,9 @@ public:
bool fRelayTxes;
std::chrono::seconds m_last_send;
std::chrono::seconds m_last_recv;
- int64_t nLastTXTime;
- int64_t nLastBlockTime;
- int64_t nTimeConnected;
+ std::chrono::seconds m_last_tx_time;
+ std::chrono::seconds m_last_block_time;
+ std::chrono::seconds m_connected;
int64_t nTimeOffset;
std::string m_addr_name;
int nVersion;
@@ -422,8 +422,8 @@ public:
std::atomic<std::chrono::seconds> m_last_send{0s};
std::atomic<std::chrono::seconds> m_last_recv{0s};
- //! Unix epoch time at peer connection, in seconds.
- const int64_t nTimeConnected;
+ //! Unix epoch time at peer connection
+ const std::chrono::seconds m_connected;
std::atomic<int64_t> nTimeOffset{0};
// Address of this peer
const CAddress addr;
@@ -562,13 +562,13 @@ public:
* preliminary validity checks and was saved to disk, even if we don't
* connect the block or it eventually fails connection. Used as an inbound
* peer eviction criterium in CConnman::AttemptToEvictConnection. */
- std::atomic<int64_t> nLastBlockTime{0};
+ std::atomic<std::chrono::seconds> m_last_block_time{0s};
/** UNIX epoch time of the last transaction received from this peer that we
* had not yet seen (e.g. not already received from another peer) and that
* was accepted into our mempool. Used as an inbound peer eviction criterium
* in CConnman::AttemptToEvictConnection. */
- std::atomic<int64_t> nLastTXTime{0};
+ std::atomic<std::chrono::seconds> m_last_tx_time{0s};
/** Last measured round-trip time. Used only for RPC/GUI stats/debugging.*/
std::atomic<std::chrono::microseconds> m_last_ping_time{0us};
@@ -1274,10 +1274,10 @@ void CaptureMessage(const CAddress& addr, const std::string& msg_type, const Spa
struct NodeEvictionCandidate
{
NodeId id;
- int64_t nTimeConnected;
+ std::chrono::seconds m_connected;
std::chrono::microseconds m_min_ping_time;
- int64_t nLastBlockTime;
- int64_t nLastTXTime;
+ std::chrono::seconds m_last_block_time;
+ std::chrono::seconds m_last_tx_time;
bool fRelevantServices;
bool fRelayTxes;
bool fBloomFilter;
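
For illustration, the net.h changes above replace raw `int64_t` second counts (`nTimeConnected`, `nLastBlockTime`, `nLastTXTime`) with `std::chrono::seconds` members (`m_connected`, `m_last_block_time`, `m_last_tx_time`). The standalone sketch below is not part of the patch; the local `count_seconds()` mirrors the helper used by the logging and RPC call sites elsewhere in this diff.

```cpp
#include <cassert>
#include <chrono>
#include <cstdint>

using namespace std::chrono_literals;

// Self-contained equivalent of the count_seconds() helper used in the logging
// and getpeerinfo changes: returns the raw tick count of a seconds duration.
constexpr int64_t count_seconds(std::chrono::seconds t) { return t.count(); }

int main()
{
    const std::chrono::seconds connected{1000};          // like CNode::m_connected
    const std::chrono::seconds peer_connect_timeout{60}; // like CConnman::m_peer_connect_timeout
    const std::chrono::seconds now{1100};

    // Mirrors CConnman::ShouldRunInactivityChecks(): durations add and compare
    // directly, with no manual unit bookkeeping.
    const bool run_checks = connected + peer_connect_timeout < now;
    assert(run_checks);

    // With raw int64_t fields the unit lived only in the variable name; with
    // chrono, comparisons across units convert correctly and automatically.
    const std::chrono::microseconds last_ping = 250ms; // like CNode::m_last_ping_time
    assert(last_ping > 0us);

    // For logging and serialization the raw count is still available.
    assert(count_seconds(connected) == 1000);
    return 0;
}
```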
diff --git a/src/net_processing.cpp b/src/net_processing.cpp
index 8c897b9a0b..2a30afdb5b 100644
--- a/src/net_processing.cpp
+++ b/src/net_processing.cpp
@@ -61,8 +61,8 @@ static constexpr int64_t CHAIN_SYNC_TIMEOUT = 20 * 60; // 20 minutes
static constexpr int64_t STALE_CHECK_INTERVAL = 10 * 60; // 10 minutes
/** How frequently to check for extra outbound peers and disconnect, in seconds */
static constexpr int64_t EXTRA_PEER_CHECK_INTERVAL = 45;
-/** Minimum time an outbound-peer-eviction candidate must be connected for, in order to evict, in seconds */
-static constexpr int64_t MINIMUM_CONNECT_TIME = 30;
+/** Minimum time an outbound-peer-eviction candidate must be connected for, in order to evict */
+static constexpr std::chrono::seconds MINIMUM_CONNECT_TIME{30};
/** SHA256("main address relay")[0:8] */
static constexpr uint64_t RANDOMIZER_ID_ADDRESS_RELAY = 0x3cac0035b5866b90ULL;
/// Age after which a stale block will no longer be served if requested as
@@ -330,7 +330,7 @@ private:
void ConsiderEviction(CNode& pto, int64_t time_in_seconds) EXCLUSIVE_LOCKS_REQUIRED(cs_main);
/** If we have extra outbound peers, try to disconnect the one with the oldest block announcement */
- void EvictExtraOutboundPeers(int64_t time_in_seconds) EXCLUSIVE_LOCKS_REQUIRED(cs_main);
+ void EvictExtraOutboundPeers(std::chrono::seconds now) EXCLUSIVE_LOCKS_REQUIRED(cs_main);
/** Retrieve unbroadcast transactions from the mempool and reattempt sending to peers */
void ReattemptInitialBroadcast(CScheduler& scheduler);
@@ -2511,7 +2511,7 @@ void PeerManagerImpl::ProcessBlock(CNode& node, const std::shared_ptr<const CBlo
bool new_block{false};
m_chainman.ProcessNewBlock(m_chainparams, block, force_processing, &new_block);
if (new_block) {
- node.nLastBlockTime = GetTime();
+ node.m_last_block_time = GetTime<std::chrono::seconds>();
} else {
LOCK(cs_main);
mapBlockSource.erase(block->GetHash());
@@ -3314,7 +3314,7 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
_RelayTransaction(tx.GetHash(), tx.GetWitnessHash());
m_orphanage.AddChildrenToWorkSet(tx, peer->m_orphan_work_set);
- pfrom.nLastTXTime = GetTime();
+ pfrom.m_last_tx_time = GetTime<std::chrono::seconds>();
LogPrint(BCLog::MEMPOOL, "AcceptToMemoryPool: peer=%d: accepted %s (poolsz %u txn, %u kB)\n",
pfrom.GetId(),
@@ -4228,7 +4228,7 @@ void PeerManagerImpl::ConsiderEviction(CNode& pto, int64_t time_in_seconds)
}
}
-void PeerManagerImpl::EvictExtraOutboundPeers(int64_t time_in_seconds)
+void PeerManagerImpl::EvictExtraOutboundPeers(std::chrono::seconds now)
{
// If we have any extra block-relay-only peers, disconnect the youngest unless
// it's given us a block -- in which case, compare with the second-youngest, and
@@ -4237,14 +4237,14 @@ void PeerManagerImpl::EvictExtraOutboundPeers(int64_t time_in_seconds)
// to temporarily in order to sync our tip; see net.cpp.
// Note that we use higher nodeid as a measure for most recent connection.
if (m_connman.GetExtraBlockRelayCount() > 0) {
- std::pair<NodeId, int64_t> youngest_peer{-1, 0}, next_youngest_peer{-1, 0};
+ std::pair<NodeId, std::chrono::seconds> youngest_peer{-1, 0}, next_youngest_peer{-1, 0};
m_connman.ForEachNode([&](CNode* pnode) {
if (!pnode->IsBlockOnlyConn() || pnode->fDisconnect) return;
if (pnode->GetId() > youngest_peer.first) {
next_youngest_peer = youngest_peer;
youngest_peer.first = pnode->GetId();
- youngest_peer.second = pnode->nLastBlockTime;
+ youngest_peer.second = pnode->m_last_block_time;
}
});
NodeId to_disconnect = youngest_peer.first;
@@ -4262,13 +4262,14 @@ void PeerManagerImpl::EvictExtraOutboundPeers(int64_t time_in_seconds)
// valid headers chain with at least as much work as our tip.
CNodeState *node_state = State(pnode->GetId());
if (node_state == nullptr ||
- (time_in_seconds - pnode->nTimeConnected >= MINIMUM_CONNECT_TIME && node_state->nBlocksInFlight == 0)) {
+ (now - pnode->m_connected >= MINIMUM_CONNECT_TIME && node_state->nBlocksInFlight == 0)) {
pnode->fDisconnect = true;
- LogPrint(BCLog::NET, "disconnecting extra block-relay-only peer=%d (last block received at time %d)\n", pnode->GetId(), pnode->nLastBlockTime);
+ LogPrint(BCLog::NET, "disconnecting extra block-relay-only peer=%d (last block received at time %d)\n",
+ pnode->GetId(), count_seconds(pnode->m_last_block_time));
return true;
} else {
LogPrint(BCLog::NET, "keeping block-relay-only peer=%d chosen for eviction (connect time: %d, blocks_in_flight: %d)\n",
- pnode->GetId(), pnode->nTimeConnected, node_state->nBlocksInFlight);
+ pnode->GetId(), count_seconds(pnode->m_connected), node_state->nBlocksInFlight);
}
return false;
});
@@ -4308,12 +4309,13 @@ void PeerManagerImpl::EvictExtraOutboundPeers(int64_t time_in_seconds)
// Also don't disconnect any peer we're trying to download a
// block from.
CNodeState &state = *State(pnode->GetId());
- if (time_in_seconds - pnode->nTimeConnected > MINIMUM_CONNECT_TIME && state.nBlocksInFlight == 0) {
+ if (now - pnode->m_connected > MINIMUM_CONNECT_TIME && state.nBlocksInFlight == 0) {
LogPrint(BCLog::NET, "disconnecting extra outbound peer=%d (last block announcement received at time %d)\n", pnode->GetId(), oldest_block_announcement);
pnode->fDisconnect = true;
return true;
} else {
- LogPrint(BCLog::NET, "keeping outbound peer=%d chosen for eviction (connect time: %d, blocks_in_flight: %d)\n", pnode->GetId(), pnode->nTimeConnected, state.nBlocksInFlight);
+ LogPrint(BCLog::NET, "keeping outbound peer=%d chosen for eviction (connect time: %d, blocks_in_flight: %d)\n",
+ pnode->GetId(), count_seconds(pnode->m_connected), state.nBlocksInFlight);
return false;
}
});
@@ -4335,7 +4337,7 @@ void PeerManagerImpl::CheckForStaleTipAndEvictPeers()
int64_t time_in_seconds = GetTime();
- EvictExtraOutboundPeers(time_in_seconds);
+ EvictExtraOutboundPeers(std::chrono::seconds{time_in_seconds});
if (time_in_seconds > m_stale_tip_check_time) {
// Check whether our tip is stale, and if so, allow using an extra
@@ -4565,7 +4567,7 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
const auto current_time = GetTime<std::chrono::microseconds>();
- if (pto->IsAddrFetchConn() && current_time - std::chrono::seconds(pto->nTimeConnected) > 10 * AVG_ADDRESS_BROADCAST_INTERVAL) {
+ if (pto->IsAddrFetchConn() && current_time - pto->m_connected > 10 * AVG_ADDRESS_BROADCAST_INTERVAL) {
LogPrint(BCLog::NET, "addrfetch connection timeout; disconnecting peer=%d\n", pto->GetId());
pto->fDisconnect = true;
return true;
diff --git a/src/policy/packages.cpp b/src/policy/packages.cpp
index cfd0539965..21f5488816 100644
--- a/src/policy/packages.cpp
+++ b/src/policy/packages.cpp
@@ -60,3 +60,20 @@ bool CheckPackage(const Package& txns, PackageValidationState& state)
}
return true;
}
+
+bool IsChildWithParents(const Package& package)
+{
+ assert(std::all_of(package.cbegin(), package.cend(), [](const auto& tx){return tx != nullptr;}));
+ if (package.size() < 2) return false;
+
+ // The package is expected to be sorted, so the last transaction is the child.
+ const auto& child = package.back();
+ std::unordered_set<uint256, SaltedTxidHasher> input_txids;
+ std::transform(child->vin.cbegin(), child->vin.cend(),
+ std::inserter(input_txids, input_txids.end()),
+ [](const auto& input) { return input.prevout.hash; });
+
+ // Every transaction must be a parent of the last transaction in the package.
+ return std::all_of(package.cbegin(), package.cend() - 1,
+ [&input_txids](const auto& ptx) { return input_txids.count(ptx->GetHash()) > 0; });
+}
diff --git a/src/policy/packages.h b/src/policy/packages.h
index 6b7ac3e450..d2744f1265 100644
--- a/src/policy/packages.h
+++ b/src/policy/packages.h
@@ -41,4 +41,10 @@ class PackageValidationState : public ValidationState<PackageValidationResult> {
*/
bool CheckPackage(const Package& txns, PackageValidationState& state);
+/** Context-free check that a package is exactly one child and its parents; not all parents need to
+ * be present, but the package must not contain any transactions that are not the child's parents.
+ * It is expected to be sorted, which means the last transaction must be the child.
+ */
+bool IsChildWithParents(const Package& package);
+
#endif // BITCOIN_POLICY_PACKAGES_H
diff --git a/src/qt/modaloverlay.cpp b/src/qt/modaloverlay.cpp
index ae27cad477..cca77881e1 100644
--- a/src/qt/modaloverlay.cpp
+++ b/src/qt/modaloverlay.cpp
@@ -108,7 +108,7 @@ void ModalOverlay::tipUpdate(int count, const QDateTime& blockDate, double nVeri
if (sample.first < (currentDate.toMSecsSinceEpoch() - 500 * 1000) || i == blockProcessTime.size() - 1) {
progressDelta = blockProcessTime[0].second - sample.second;
timeDelta = blockProcessTime[0].first - sample.first;
- progressPerHour = progressDelta / (double) timeDelta * 1000 * 3600;
+ progressPerHour = (progressDelta > 0) ? progressDelta / (double)timeDelta * 1000 * 3600 : 0;
remainingMSecs = (progressDelta > 0) ? remainingProgress / progressDelta * timeDelta : -1;
break;
}
diff --git a/src/qt/rpcconsole.cpp b/src/qt/rpcconsole.cpp
index 5d08aff6b9..9f06d225c0 100644
--- a/src/qt/rpcconsole.cpp
+++ b/src/qt/rpcconsole.cpp
@@ -1173,9 +1173,9 @@ void RPCConsole::updateDetailWidget()
if (bip152_hb_settings.isEmpty()) bip152_hb_settings = ts.no;
ui->peerHighBandwidth->setText(bip152_hb_settings);
const auto time_now{GetTime<std::chrono::seconds>()};
- ui->peerConnTime->setText(GUIUtil::formatDurationStr(time_now - std::chrono::seconds{stats->nodeStats.nTimeConnected}));
- ui->peerLastBlock->setText(TimeDurationField(time_now, std::chrono::seconds{stats->nodeStats.nLastBlockTime}));
- ui->peerLastTx->setText(TimeDurationField(time_now, std::chrono::seconds{stats->nodeStats.nLastTXTime}));
+ ui->peerConnTime->setText(GUIUtil::formatDurationStr(time_now - stats->nodeStats.m_connected));
+ ui->peerLastBlock->setText(TimeDurationField(time_now, stats->nodeStats.m_last_block_time));
+ ui->peerLastTx->setText(TimeDurationField(time_now, stats->nodeStats.m_last_tx_time));
ui->peerLastSend->setText(TimeDurationField(time_now, stats->nodeStats.m_last_send));
ui->peerLastRecv->setText(TimeDurationField(time_now, stats->nodeStats.m_last_recv));
ui->peerBytesSent->setText(GUIUtil::formatBytes(stats->nodeStats.nSendBytes));
diff --git a/src/rpc/net.cpp b/src/rpc/net.cpp
index eea854c375..557b1296a6 100644
--- a/src/rpc/net.cpp
+++ b/src/rpc/net.cpp
@@ -198,11 +198,11 @@ static RPCHelpMan getpeerinfo()
obj.pushKV("relaytxes", stats.fRelayTxes);
obj.pushKV("lastsend", count_seconds(stats.m_last_send));
obj.pushKV("lastrecv", count_seconds(stats.m_last_recv));
- obj.pushKV("last_transaction", stats.nLastTXTime);
- obj.pushKV("last_block", stats.nLastBlockTime);
+ obj.pushKV("last_transaction", count_seconds(stats.m_last_tx_time));
+ obj.pushKV("last_block", count_seconds(stats.m_last_block_time));
obj.pushKV("bytessent", stats.nSendBytes);
obj.pushKV("bytesrecv", stats.nRecvBytes);
- obj.pushKV("conntime", stats.nTimeConnected);
+ obj.pushKV("conntime", count_seconds(stats.m_connected));
obj.pushKV("timeoffset", stats.nTimeOffset);
if (stats.m_last_ping_time > 0us) {
obj.pushKV("pingtime", CountSecondsDouble(stats.m_last_ping_time));
diff --git a/src/rpc/rawtransaction.cpp b/src/rpc/rawtransaction.cpp
index f2e8104e14..9be5c82311 100644
--- a/src/rpc/rawtransaction.cpp
+++ b/src/rpc/rawtransaction.cpp
@@ -1028,6 +1028,8 @@ static RPCHelpMan testmempoolaccept()
continue;
}
const auto& tx_result = it->second;
+ // Package testmempoolaccept doesn't allow transactions to already be in the mempool.
+ CHECK_NONFATAL(tx_result.m_result_type != MempoolAcceptResult::ResultType::MEMPOOL_ENTRY);
if (tx_result.m_result_type == MempoolAcceptResult::ResultType::VALID) {
const CAmount fee = tx_result.m_base_fees.value();
// Check that fee does not exceed maximum fee
diff --git a/src/test/denialofservice_tests.cpp b/src/test/denialofservice_tests.cpp
index f1abd5183a..1465e80157 100644
--- a/src/test/denialofservice_tests.cpp
+++ b/src/test/denialofservice_tests.cpp
@@ -131,6 +131,9 @@ BOOST_AUTO_TEST_CASE(stale_tip_peer_management)
options.m_max_outbound_full_relay = max_outbound_full_relay;
options.nMaxFeeler = MAX_FEELER_CONNECTIONS;
+ const auto time_init{GetTime<std::chrono::seconds>()};
+ SetMockTime(time_init);
+ const auto time_later{time_init + 3 * std::chrono::seconds{chainparams.GetConsensus().nPowTargetSpacing} + 1s};
connman->Init(options);
std::vector<CNode *> vNodes;
@@ -146,7 +149,7 @@ BOOST_AUTO_TEST_CASE(stale_tip_peer_management)
BOOST_CHECK(node->fDisconnect == false);
}
- SetMockTime(GetTime() + 3 * chainparams.GetConsensus().nPowTargetSpacing + 1);
+ SetMockTime(time_later);
// Now tip should definitely be stale, and we should look for an extra
// outbound peer
@@ -161,7 +164,9 @@ BOOST_AUTO_TEST_CASE(stale_tip_peer_management)
// If we add one more peer, something should get marked for eviction
// on the next check (since we're mocking the time to be in the future, the
// required time connected check should be satisfied).
+ SetMockTime(time_init);
AddRandomOutboundPeer(vNodes, *peerLogic, *connman, ConnectionType::OUTBOUND_FULL_RELAY);
+ SetMockTime(time_later);
peerLogic->CheckForStaleTipAndEvictPeers();
for (int i = 0; i < max_outbound_full_relay; ++i) {
@@ -237,7 +242,7 @@ BOOST_AUTO_TEST_CASE(block_relay_only_eviction)
// Update the last block time for the extra peer,
// and check that the next youngest peer gets evicted.
vNodes.back()->fDisconnect = false;
- vNodes.back()->nLastBlockTime = GetTime();
+ vNodes.back()->m_last_block_time = GetTime<std::chrono::seconds>();
peerLogic->CheckForStaleTipAndEvictPeers();
for (int i = 0; i < max_outbound_block_relay - 1; ++i) {
diff --git a/src/test/fuzz/node_eviction.cpp b/src/test/fuzz/node_eviction.cpp
index 64031fde42..bb0515cfab 100644
--- a/src/test/fuzz/node_eviction.cpp
+++ b/src/test/fuzz/node_eviction.cpp
@@ -21,10 +21,10 @@ FUZZ_TARGET(node_eviction)
LIMITED_WHILE(fuzzed_data_provider.ConsumeBool(), 10000) {
eviction_candidates.push_back({
/*id=*/fuzzed_data_provider.ConsumeIntegral<NodeId>(),
- /*nTimeConnected=*/fuzzed_data_provider.ConsumeIntegral<int64_t>(),
+ /*m_connected=*/std::chrono::seconds{fuzzed_data_provider.ConsumeIntegral<int64_t>()},
/*m_min_ping_time=*/std::chrono::microseconds{fuzzed_data_provider.ConsumeIntegral<int64_t>()},
- /*nLastBlockTime=*/fuzzed_data_provider.ConsumeIntegral<int64_t>(),
- /*nLastTXTime=*/fuzzed_data_provider.ConsumeIntegral<int64_t>(),
+ /*m_last_block_time=*/std::chrono::seconds{fuzzed_data_provider.ConsumeIntegral<int64_t>()},
+ /*m_last_tx_time=*/std::chrono::seconds{fuzzed_data_provider.ConsumeIntegral<int64_t>()},
/*fRelevantServices=*/fuzzed_data_provider.ConsumeBool(),
/*fRelayTxes=*/fuzzed_data_provider.ConsumeBool(),
/*fBloomFilter=*/fuzzed_data_provider.ConsumeBool(),
diff --git a/src/test/net_peer_eviction_tests.cpp b/src/test/net_peer_eviction_tests.cpp
index 9470ed814d..78ad24a408 100644
--- a/src/test/net_peer_eviction_tests.cpp
+++ b/src/test/net_peer_eviction_tests.cpp
@@ -64,11 +64,11 @@ BOOST_AUTO_TEST_CASE(peer_protection_test)
FastRandomContext random_context{true};
int num_peers{12};
- // Expect half of the peers with greatest uptime (the lowest nTimeConnected)
+ // Expect half of the peers with greatest uptime (the lowest m_connected)
// to be protected from eviction.
BOOST_CHECK(IsProtected(
num_peers, [](NodeEvictionCandidate& c) {
- c.nTimeConnected = c.id;
+ c.m_connected = std::chrono::seconds{c.id};
c.m_is_local = false;
c.m_network = NET_IPV4;
},
@@ -79,7 +79,7 @@ BOOST_AUTO_TEST_CASE(peer_protection_test)
// Verify in the opposite direction.
BOOST_CHECK(IsProtected(
num_peers, [num_peers](NodeEvictionCandidate& c) {
- c.nTimeConnected = num_peers - c.id;
+ c.m_connected = std::chrono::seconds{num_peers - c.id};
c.m_is_local = false;
c.m_network = NET_IPV6;
},
@@ -101,10 +101,10 @@ BOOST_AUTO_TEST_CASE(peer_protection_test)
random_context));
// Expect 1/4 onion peers and 1/4 of the other peers to be protected,
- // sorted by longest uptime (lowest nTimeConnected), if no localhost or I2P peers.
+ // sorted by longest uptime (lowest m_connected), if no localhost or I2P peers.
BOOST_CHECK(IsProtected(
num_peers, [](NodeEvictionCandidate& c) {
- c.nTimeConnected = c.id;
+ c.m_connected = std::chrono::seconds{c.id};
c.m_is_local = false;
c.m_network = (c.id == 3 || c.id > 7) ? NET_ONION : NET_IPV6;
},
@@ -124,10 +124,10 @@ BOOST_AUTO_TEST_CASE(peer_protection_test)
random_context));
// Expect 1/4 localhost peers and 1/4 of the other peers to be protected,
- // sorted by longest uptime (lowest nTimeConnected), if no onion or I2P peers.
+ // sorted by longest uptime (lowest m_connected), if no onion or I2P peers.
BOOST_CHECK(IsProtected(
num_peers, [](NodeEvictionCandidate& c) {
- c.nTimeConnected = c.id;
+ c.m_connected = std::chrono::seconds{c.id};
c.m_is_local = (c.id > 6);
c.m_network = NET_IPV6;
},
@@ -147,10 +147,10 @@ BOOST_AUTO_TEST_CASE(peer_protection_test)
random_context));
// Expect 1/4 I2P peers and 1/4 of the other peers to be protected,
- // sorted by longest uptime (lowest nTimeConnected), if no onion or localhost peers.
+ // sorted by longest uptime (lowest m_connected), if no onion or localhost peers.
BOOST_CHECK(IsProtected(
num_peers, [](NodeEvictionCandidate& c) {
- c.nTimeConnected = c.id;
+ c.m_connected = std::chrono::seconds{c.id};
c.m_is_local = false;
c.m_network = (c.id == 4 || c.id > 8) ? NET_I2P : NET_IPV6;
},
@@ -165,7 +165,7 @@ BOOST_AUTO_TEST_CASE(peer_protection_test)
// stable sort breaks tie with array order of localhost first.
BOOST_CHECK(IsProtected(
4, [](NodeEvictionCandidate& c) {
- c.nTimeConnected = c.id;
+ c.m_connected = std::chrono::seconds{c.id};
c.m_is_local = (c.id == 4);
c.m_network = (c.id == 3) ? NET_ONION : NET_IPV4;
},
@@ -178,7 +178,7 @@ BOOST_AUTO_TEST_CASE(peer_protection_test)
// uptime; stable sort breaks tie with array order of localhost first.
BOOST_CHECK(IsProtected(
7, [](NodeEvictionCandidate& c) {
- c.nTimeConnected = c.id;
+ c.m_connected = std::chrono::seconds{c.id};
c.m_is_local = (c.id == 6);
c.m_network = (c.id == 5) ? NET_ONION : NET_IPV4;
},
@@ -191,7 +191,7 @@ BOOST_AUTO_TEST_CASE(peer_protection_test)
// by uptime; stable sort breaks tie with array order of localhost first.
BOOST_CHECK(IsProtected(
8, [](NodeEvictionCandidate& c) {
- c.nTimeConnected = c.id;
+ c.m_connected = std::chrono::seconds{c.id};
c.m_is_local = (c.id == 6);
c.m_network = (c.id == 5) ? NET_ONION : NET_IPV4;
},
@@ -204,7 +204,7 @@ BOOST_AUTO_TEST_CASE(peer_protection_test)
// uptime; stable sort breaks ties with the array order of localhost first.
BOOST_CHECK(IsProtected(
num_peers, [](NodeEvictionCandidate& c) {
- c.nTimeConnected = c.id;
+ c.m_connected = std::chrono::seconds{c.id};
c.m_is_local = (c.id == 6 || c.id == 9 || c.id == 11);
c.m_network = (c.id == 7 || c.id == 8 || c.id == 10) ? NET_ONION : NET_IPV6;
},
@@ -216,7 +216,7 @@ BOOST_AUTO_TEST_CASE(peer_protection_test)
// protect 2 localhost and 1 onion, plus 3 other peers, sorted by longest uptime.
BOOST_CHECK(IsProtected(
num_peers, [](NodeEvictionCandidate& c) {
- c.nTimeConnected = c.id;
+ c.m_connected = std::chrono::seconds{c.id};
c.m_is_local = (c.id > 4 && c.id < 9);
c.m_network = (c.id == 10) ? NET_ONION : NET_IPV4;
},
@@ -228,7 +228,7 @@ BOOST_AUTO_TEST_CASE(peer_protection_test)
// protect 2 localhost and 2 onions, plus 4 other peers, sorted by longest uptime.
BOOST_CHECK(IsProtected(
16, [](NodeEvictionCandidate& c) {
- c.nTimeConnected = c.id;
+ c.m_connected = std::chrono::seconds{c.id};
c.m_is_local = (c.id == 6 || c.id == 9 || c.id == 11 || c.id == 12);
c.m_network = (c.id == 8 || c.id == 10) ? NET_ONION : NET_IPV6;
},
@@ -241,7 +241,7 @@ BOOST_AUTO_TEST_CASE(peer_protection_test)
// others, sorted by longest uptime.
BOOST_CHECK(IsProtected(
16, [](NodeEvictionCandidate& c) {
- c.nTimeConnected = c.id;
+ c.m_connected = std::chrono::seconds{c.id};
c.m_is_local = (c.id > 10);
c.m_network = (c.id == 10) ? NET_ONION : NET_IPV4;
},
@@ -254,7 +254,7 @@ BOOST_AUTO_TEST_CASE(peer_protection_test)
// plus 4 others, sorted by longest uptime.
BOOST_CHECK(IsProtected(
16, [](NodeEvictionCandidate& c) {
- c.nTimeConnected = c.id;
+ c.m_connected = std::chrono::seconds{c.id};
c.m_is_local = (c.id == 15);
c.m_network = (c.id > 6 && c.id < 11) ? NET_ONION : NET_IPV6;
},
@@ -267,7 +267,7 @@ BOOST_AUTO_TEST_CASE(peer_protection_test)
// others, sorted by longest uptime.
BOOST_CHECK(IsProtected(
num_peers, [](NodeEvictionCandidate& c) {
- c.nTimeConnected = c.id;
+ c.m_connected = std::chrono::seconds{c.id};
c.m_is_local = false;
if (c.id == 8 || c.id == 10) {
c.m_network = NET_ONION;
@@ -288,7 +288,7 @@ BOOST_AUTO_TEST_CASE(peer_protection_test)
// by longest uptime; stable sort breaks tie with array order of I2P first.
BOOST_CHECK(IsProtected(
4, [](NodeEvictionCandidate& c) {
- c.nTimeConnected = c.id;
+ c.m_connected = std::chrono::seconds{c.id};
c.m_is_local = (c.id == 3);
if (c.id == 4) {
c.m_network = NET_I2P;
@@ -307,7 +307,7 @@ BOOST_AUTO_TEST_CASE(peer_protection_test)
// by longest uptime; stable sort breaks tie with array order of I2P first.
BOOST_CHECK(IsProtected(
7, [](NodeEvictionCandidate& c) {
- c.nTimeConnected = c.id;
+ c.m_connected = std::chrono::seconds{c.id};
c.m_is_local = (c.id == 4);
if (c.id == 6) {
c.m_network = NET_I2P;
@@ -326,7 +326,7 @@ BOOST_AUTO_TEST_CASE(peer_protection_test)
// by uptime; stable sort breaks tie with array order of I2P then localhost.
BOOST_CHECK(IsProtected(
8, [](NodeEvictionCandidate& c) {
- c.nTimeConnected = c.id;
+ c.m_connected = std::chrono::seconds{c.id};
c.m_is_local = (c.id == 6);
if (c.id == 5) {
c.m_network = NET_I2P;
@@ -345,7 +345,7 @@ BOOST_AUTO_TEST_CASE(peer_protection_test)
// for 8 total, sorted by longest uptime.
BOOST_CHECK(IsProtected(
16, [](NodeEvictionCandidate& c) {
- c.nTimeConnected = c.id;
+ c.m_connected = std::chrono::seconds{c.id};
c.m_is_local = (c.id == 6 || c.id > 11);
if (c.id == 7 || c.id == 11) {
c.m_network = NET_I2P;
@@ -364,7 +364,7 @@ BOOST_AUTO_TEST_CASE(peer_protection_test)
// sorted by longest uptime.
BOOST_CHECK(IsProtected(
24, [](NodeEvictionCandidate& c) {
- c.nTimeConnected = c.id;
+ c.m_connected = std::chrono::seconds{c.id};
c.m_is_local = (c.id == 12);
if (c.id > 14 && c.id < 23) { // 4 protected instead of usual 2
c.m_network = NET_I2P;
@@ -383,7 +383,7 @@ BOOST_AUTO_TEST_CASE(peer_protection_test)
// unused localhost slot), plus 6 others for 12/24 total, sorted by longest uptime.
BOOST_CHECK(IsProtected(
24, [](NodeEvictionCandidate& c) {
- c.nTimeConnected = c.id;
+ c.m_connected = std::chrono::seconds{c.id};
c.m_is_local = (c.id == 15);
if (c.id == 12 || c.id == 14 || c.id == 17) {
c.m_network = NET_I2P;
@@ -402,7 +402,7 @@ BOOST_AUTO_TEST_CASE(peer_protection_test)
// for 12/24 total, sorted by longest uptime.
BOOST_CHECK(IsProtected(
24, [](NodeEvictionCandidate& c) {
- c.nTimeConnected = c.id;
+ c.m_connected = std::chrono::seconds{c.id};
c.m_is_local = (c.id == 13);
if (c.id > 16) {
c.m_network = NET_I2P;
@@ -421,7 +421,7 @@ BOOST_AUTO_TEST_CASE(peer_protection_test)
// by longest uptime.
BOOST_CHECK(IsProtected(
24, [](NodeEvictionCandidate& c) {
- c.nTimeConnected = c.id;
+ c.m_connected = std::chrono::seconds{c.id};
c.m_is_local = (c.id > 15);
if (c.id > 10 && c.id < 15) {
c.m_network = NET_I2P;
@@ -484,7 +484,7 @@ BOOST_AUTO_TEST_CASE(peer_eviction_test)
// into our mempool should be protected from eviction.
BOOST_CHECK(!IsEvicted(
number_of_nodes, [number_of_nodes](NodeEvictionCandidate& candidate) {
- candidate.nLastTXTime = number_of_nodes - candidate.id;
+ candidate.m_last_tx_time = std::chrono::seconds{number_of_nodes - candidate.id};
},
{0, 1, 2, 3}, random_context));
@@ -492,7 +492,7 @@ BOOST_AUTO_TEST_CASE(peer_eviction_test)
// blocks should be protected from eviction.
BOOST_CHECK(!IsEvicted(
number_of_nodes, [number_of_nodes](NodeEvictionCandidate& candidate) {
- candidate.nLastBlockTime = number_of_nodes - candidate.id;
+ candidate.m_last_block_time = std::chrono::seconds{number_of_nodes - candidate.id};
if (candidate.id <= 7) {
candidate.fRelayTxes = false;
candidate.fRelevantServices = true;
@@ -504,14 +504,14 @@ BOOST_AUTO_TEST_CASE(peer_eviction_test)
// protected from eviction.
BOOST_CHECK(!IsEvicted(
number_of_nodes, [number_of_nodes](NodeEvictionCandidate& candidate) {
- candidate.nLastBlockTime = number_of_nodes - candidate.id;
+ candidate.m_last_block_time = std::chrono::seconds{number_of_nodes - candidate.id};
},
{0, 1, 2, 3}, random_context));
// Combination of the previous two tests.
BOOST_CHECK(!IsEvicted(
number_of_nodes, [number_of_nodes](NodeEvictionCandidate& candidate) {
- candidate.nLastBlockTime = number_of_nodes - candidate.id;
+ candidate.m_last_block_time = std::chrono::seconds{number_of_nodes - candidate.id};
if (candidate.id <= 7) {
candidate.fRelayTxes = false;
candidate.fRelevantServices = true;
@@ -524,8 +524,8 @@ BOOST_AUTO_TEST_CASE(peer_eviction_test)
number_of_nodes, [number_of_nodes](NodeEvictionCandidate& candidate) {
candidate.nKeyedNetGroup = number_of_nodes - candidate.id; // 4 protected
candidate.m_min_ping_time = std::chrono::microseconds{candidate.id}; // 8 protected
- candidate.nLastTXTime = number_of_nodes - candidate.id; // 4 protected
- candidate.nLastBlockTime = number_of_nodes - candidate.id; // 4 protected
+ candidate.m_last_tx_time = std::chrono::seconds{number_of_nodes - candidate.id}; // 4 protected
+ candidate.m_last_block_time = std::chrono::seconds{number_of_nodes - candidate.id}; // 4 protected
},
{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19}, random_context));
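
For illustration, the protection tests above rely on the property that, absent localhost/onion/I2P reservations, the half of the candidates with the longest uptime (lowest `m_connected`) is protected from eviction. The standalone sketch below is not part of the patch; `Candidate` and `ProtectLongestUptime()` are hypothetical simplifications of `NodeEvictionCandidate` and `ProtectEvictionCandidatesByRatio()` in src/net.cpp, which additionally reserves slots per network.

```cpp
#include <algorithm>
#include <cassert>
#include <chrono>
#include <vector>

// Hypothetical, trimmed-down candidate: an id and the connection time,
// mirroring NodeEvictionCandidate::m_connected after this patch.
struct Candidate {
    int id;
    std::chrono::seconds m_connected;
};

// Protect the half of the candidates with the longest uptime, i.e. the lowest
// m_connected values.
std::vector<int> ProtectLongestUptime(std::vector<Candidate> candidates)
{
    std::stable_sort(candidates.begin(), candidates.end(),
                     [](const Candidate& a, const Candidate& b) { return a.m_connected < b.m_connected; });
    std::vector<int> protected_ids;
    for (size_t i = 0; i < candidates.size() / 2; ++i) protected_ids.push_back(candidates[i].id);
    return protected_ids;
}

int main()
{
    std::vector<Candidate> candidates;
    for (int id = 0; id < 12; ++id) {
        candidates.push_back({id, std::chrono::seconds{id}}); // uptime decreases with id
    }
    const auto protected_ids = ProtectLongestUptime(candidates);
    // Ids 0..5 connected earliest, so they form the protected half.
    assert((protected_ids == std::vector<int>{0, 1, 2, 3, 4, 5}));
    return 0;
}
```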
diff --git a/src/test/txpackage_tests.cpp b/src/test/txpackage_tests.cpp
index 2193e21780..6f78b43826 100644
--- a/src/test/txpackage_tests.cpp
+++ b/src/test/txpackage_tests.cpp
@@ -114,4 +114,217 @@ BOOST_FIXTURE_TEST_CASE(package_validation_tests, TestChain100Setup)
// Check that mempool size hasn't changed.
BOOST_CHECK_EQUAL(m_node.mempool->size(), initialPoolSize);
}
+
+BOOST_FIXTURE_TEST_CASE(noncontextual_package_tests, TestChain100Setup)
+{
+ // The signatures won't be verified so we can just use a placeholder
+ CKey placeholder_key;
+ placeholder_key.MakeNewKey(true);
+ CScript spk = GetScriptForDestination(PKHash(placeholder_key.GetPubKey()));
+ CKey placeholder_key_2;
+ placeholder_key_2.MakeNewKey(true);
+ CScript spk2 = GetScriptForDestination(PKHash(placeholder_key_2.GetPubKey()));
+
+ // Parent and Child Package
+ {
+ auto mtx_parent = CreateValidMempoolTransaction(m_coinbase_txns[0], 0, 0, coinbaseKey, spk,
+ CAmount(49 * COIN), /* submit */ false);
+ CTransactionRef tx_parent = MakeTransactionRef(mtx_parent);
+
+ auto mtx_child = CreateValidMempoolTransaction(tx_parent, 0, 101, placeholder_key, spk2,
+ CAmount(48 * COIN), /* submit */ false);
+ CTransactionRef tx_child = MakeTransactionRef(mtx_child);
+
+ PackageValidationState state;
+ BOOST_CHECK(CheckPackage({tx_parent, tx_child}, state));
+ BOOST_CHECK(!CheckPackage({tx_child, tx_parent}, state));
+ BOOST_CHECK_EQUAL(state.GetResult(), PackageValidationResult::PCKG_POLICY);
+ BOOST_CHECK_EQUAL(state.GetRejectReason(), "package-not-sorted");
+ BOOST_CHECK(IsChildWithParents({tx_parent, tx_child}));
+ }
+
+ // 24 Parents and 1 Child
+ {
+ Package package;
+ CMutableTransaction child;
+ for (int i{0}; i < 24; ++i) {
+ auto parent = MakeTransactionRef(CreateValidMempoolTransaction(m_coinbase_txns[i + 1],
+ 0, 0, coinbaseKey, spk, CAmount(48 * COIN), false));
+ package.emplace_back(parent);
+ child.vin.push_back(CTxIn(COutPoint(parent->GetHash(), 0)));
+ }
+ child.vout.push_back(CTxOut(47 * COIN, spk2));
+
+ // The child must be in the package.
+ BOOST_CHECK(!IsChildWithParents(package));
+
+ // The parents can be in any order.
+ FastRandomContext rng;
+ Shuffle(package.begin(), package.end(), rng);
+ package.push_back(MakeTransactionRef(child));
+
+ PackageValidationState state;
+ BOOST_CHECK(CheckPackage(package, state));
+ BOOST_CHECK(IsChildWithParents(package));
+
+ package.erase(package.begin());
+ BOOST_CHECK(IsChildWithParents(package));
+
+ // The package cannot have unrelated transactions.
+ package.insert(package.begin(), m_coinbase_txns[0]);
+ BOOST_CHECK(!IsChildWithParents(package));
+ }
+
+ // 2 Parents and 1 Child where one parent depends on the other.
+ {
+ CMutableTransaction mtx_parent;
+ mtx_parent.vin.push_back(CTxIn(COutPoint(m_coinbase_txns[0]->GetHash(), 0)));
+ mtx_parent.vout.push_back(CTxOut(20 * COIN, spk));
+ mtx_parent.vout.push_back(CTxOut(20 * COIN, spk2));
+ CTransactionRef tx_parent = MakeTransactionRef(mtx_parent);
+
+ CMutableTransaction mtx_parent_also_child;
+ mtx_parent_also_child.vin.push_back(CTxIn(COutPoint(tx_parent->GetHash(), 0)));
+ mtx_parent_also_child.vout.push_back(CTxOut(20 * COIN, spk));
+ CTransactionRef tx_parent_also_child = MakeTransactionRef(mtx_parent_also_child);
+
+ CMutableTransaction mtx_child;
+ mtx_child.vin.push_back(CTxIn(COutPoint(tx_parent->GetHash(), 1)));
+ mtx_child.vin.push_back(CTxIn(COutPoint(tx_parent_also_child->GetHash(), 0)));
+ mtx_child.vout.push_back(CTxOut(39 * COIN, spk));
+ CTransactionRef tx_child = MakeTransactionRef(mtx_child);
+
+ PackageValidationState state;
+ BOOST_CHECK(IsChildWithParents({tx_parent, tx_parent_also_child}));
+ BOOST_CHECK(IsChildWithParents({tx_parent, tx_child}));
+ BOOST_CHECK(IsChildWithParents({tx_parent, tx_parent_also_child, tx_child}));
+ // IsChildWithParents does not detect unsorted parents.
+ BOOST_CHECK(IsChildWithParents({tx_parent_also_child, tx_parent, tx_child}));
+ BOOST_CHECK(CheckPackage({tx_parent, tx_parent_also_child, tx_child}, state));
+ BOOST_CHECK(!CheckPackage({tx_parent_also_child, tx_parent, tx_child}, state));
+ BOOST_CHECK_EQUAL(state.GetResult(), PackageValidationResult::PCKG_POLICY);
+ BOOST_CHECK_EQUAL(state.GetRejectReason(), "package-not-sorted");
+ }
+}
+
+BOOST_FIXTURE_TEST_CASE(package_submission_tests, TestChain100Setup)
+{
+ LOCK(cs_main);
+ unsigned int expected_pool_size = m_node.mempool->size();
+ CKey parent_key;
+ parent_key.MakeNewKey(true);
+ CScript parent_locking_script = GetScriptForDestination(PKHash(parent_key.GetPubKey()));
+
+ // Unrelated transactions are not allowed in package submission.
+ Package package_unrelated;
+ for (size_t i{0}; i < 10; ++i) {
+ auto mtx = CreateValidMempoolTransaction(/* input_transaction */ m_coinbase_txns[i + 25], /* vout */ 0,
+ /* input_height */ 0, /* input_signing_key */ coinbaseKey,
+ /* output_destination */ parent_locking_script,
+ /* output_amount */ CAmount(49 * COIN), /* submit */ false);
+ package_unrelated.emplace_back(MakeTransactionRef(mtx));
+ }
+ auto result_unrelated_submit = ProcessNewPackage(m_node.chainman->ActiveChainstate(), *m_node.mempool,
+ package_unrelated, /* test_accept */ false);
+ BOOST_CHECK(result_unrelated_submit.m_state.IsInvalid());
+ BOOST_CHECK_EQUAL(result_unrelated_submit.m_state.GetResult(), PackageValidationResult::PCKG_POLICY);
+ BOOST_CHECK_EQUAL(result_unrelated_submit.m_state.GetRejectReason(), "package-not-child-with-parents");
+ BOOST_CHECK_EQUAL(m_node.mempool->size(), expected_pool_size);
+
+ // Parent and Child (and Grandchild) Package
+ Package package_parent_child;
+ Package package_3gen;
+ auto mtx_parent = CreateValidMempoolTransaction(/* input_transaction */ m_coinbase_txns[0], /* vout */ 0,
+ /* input_height */ 0, /* input_signing_key */ coinbaseKey,
+ /* output_destination */ parent_locking_script,
+ /* output_amount */ CAmount(49 * COIN), /* submit */ false);
+ CTransactionRef tx_parent = MakeTransactionRef(mtx_parent);
+ package_parent_child.push_back(tx_parent);
+ package_3gen.push_back(tx_parent);
+
+ CKey child_key;
+ child_key.MakeNewKey(true);
+ CScript child_locking_script = GetScriptForDestination(PKHash(child_key.GetPubKey()));
+ auto mtx_child = CreateValidMempoolTransaction(/* input_transaction */ tx_parent, /* vout */ 0,
+ /* input_height */ 101, /* input_signing_key */ parent_key,
+ /* output_destination */ child_locking_script,
+ /* output_amount */ CAmount(48 * COIN), /* submit */ false);
+ CTransactionRef tx_child = MakeTransactionRef(mtx_child);
+ package_parent_child.push_back(tx_child);
+ package_3gen.push_back(tx_child);
+
+ CKey grandchild_key;
+ grandchild_key.MakeNewKey(true);
+ CScript grandchild_locking_script = GetScriptForDestination(PKHash(grandchild_key.GetPubKey()));
+ auto mtx_grandchild = CreateValidMempoolTransaction(/* input_transaction */ tx_child, /* vout */ 0,
+ /* input_height */ 101, /* input_signing_key */ child_key,
+ /* output_destination */ grandchild_locking_script,
+ /* output_amount */ CAmount(47 * COIN), /* submit */ false);
+ CTransactionRef tx_grandchild = MakeTransactionRef(mtx_grandchild);
+ package_3gen.push_back(tx_grandchild);
+
+ // 3 Generations is not allowed.
+ {
+ auto result_3gen_submit = ProcessNewPackage(m_node.chainman->ActiveChainstate(), *m_node.mempool,
+ package_3gen, /* test_accept */ false);
+ BOOST_CHECK(result_3gen_submit.m_state.IsInvalid());
+ BOOST_CHECK_EQUAL(result_3gen_submit.m_state.GetResult(), PackageValidationResult::PCKG_POLICY);
+ BOOST_CHECK_EQUAL(result_3gen_submit.m_state.GetRejectReason(), "package-not-child-with-parents");
+ BOOST_CHECK_EQUAL(m_node.mempool->size(), expected_pool_size);
+ }
+
+ // Child with missing parent.
+ mtx_child.vin.push_back(CTxIn(COutPoint(package_unrelated[0]->GetHash(), 0)));
+ Package package_missing_parent;
+ package_missing_parent.push_back(tx_parent);
+ package_missing_parent.push_back(MakeTransactionRef(mtx_child));
+ {
+ const auto result_missing_parent = ProcessNewPackage(m_node.chainman->ActiveChainstate(), *m_node.mempool,
+ package_missing_parent, /* test_accept */ false);
+ BOOST_CHECK(result_missing_parent.m_state.IsInvalid());
+ BOOST_CHECK_EQUAL(result_missing_parent.m_state.GetResult(), PackageValidationResult::PCKG_POLICY);
+ BOOST_CHECK_EQUAL(result_missing_parent.m_state.GetRejectReason(), "package-not-child-with-unconfirmed-parents");
+ BOOST_CHECK_EQUAL(m_node.mempool->size(), expected_pool_size);
+
+ }
+
+ // Submit package with parent + child.
+ {
+ const auto submit_parent_child = ProcessNewPackage(m_node.chainman->ActiveChainstate(), *m_node.mempool,
+ package_parent_child, /* test_accept */ false);
+ expected_pool_size += 2;
+ BOOST_CHECK_MESSAGE(submit_parent_child.m_state.IsValid(),
+ "Package validation unexpectedly failed: " << submit_parent_child.m_state.GetRejectReason());
+ auto it_parent = submit_parent_child.m_tx_results.find(tx_parent->GetWitnessHash());
+ auto it_child = submit_parent_child.m_tx_results.find(tx_child->GetWitnessHash());
+ BOOST_CHECK(it_parent != submit_parent_child.m_tx_results.end());
+ BOOST_CHECK(it_parent->second.m_state.IsValid());
+ BOOST_CHECK(it_child != submit_parent_child.m_tx_results.end());
+ BOOST_CHECK(it_child->second.m_state.IsValid());
+
+ BOOST_CHECK_EQUAL(m_node.mempool->size(), expected_pool_size);
+ BOOST_CHECK(m_node.mempool->exists(GenTxid::Txid(tx_parent->GetHash())));
+ BOOST_CHECK(m_node.mempool->exists(GenTxid::Txid(tx_child->GetHash())));
+ }
+
+ // Already-in-mempool transactions should be detected and de-duplicated.
+ {
+ const auto submit_deduped = ProcessNewPackage(m_node.chainman->ActiveChainstate(), *m_node.mempool,
+ package_parent_child, /* test_accept */ false);
+ BOOST_CHECK_MESSAGE(submit_deduped.m_state.IsValid(),
+ "Package validation unexpectedly failed: " << submit_deduped.m_state.GetRejectReason());
+ auto it_parent_deduped = submit_deduped.m_tx_results.find(tx_parent->GetWitnessHash());
+ auto it_child_deduped = submit_deduped.m_tx_results.find(tx_child->GetWitnessHash());
+ BOOST_CHECK(it_parent_deduped != submit_deduped.m_tx_results.end());
+ BOOST_CHECK(it_parent_deduped->second.m_state.IsValid());
+ BOOST_CHECK(it_parent_deduped->second.m_result_type == MempoolAcceptResult::ResultType::MEMPOOL_ENTRY);
+ BOOST_CHECK(it_child_deduped != submit_deduped.m_tx_results.end());
+ BOOST_CHECK(it_child_deduped->second.m_state.IsValid());
+ BOOST_CHECK(it_child_deduped->second.m_result_type == MempoolAcceptResult::ResultType::MEMPOOL_ENTRY);
+
+ BOOST_CHECK_EQUAL(m_node.mempool->size(), expected_pool_size);
+ BOOST_CHECK(m_node.mempool->exists(GenTxid::Txid(tx_parent->GetHash())));
+ BOOST_CHECK(m_node.mempool->exists(GenTxid::Txid(tx_child->GetHash())));
+ }
+}
BOOST_AUTO_TEST_SUITE_END()
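The child-with-parents shape exercised by the tests above can be modelled with a short standalone sketch. The toy types and names below (ToyTx, ToyIsChildWithParents) are illustrative only and are not code from this patch: the last transaction must spend an output of every transaction that precedes it, which is why the parent/child pair is accepted while the three-generation chain is rejected.

// Standalone sketch (toy types, not the project's IsChildWithParents): the last
// transaction must spend an output of every transaction that precedes it.
#include <algorithm>
#include <set>
#include <string>
#include <vector>

struct ToyTx {
    std::string txid;                 // this transaction's id
    std::vector<std::string> spends;  // txids whose outputs this transaction spends
};

bool ToyIsChildWithParents(const std::vector<ToyTx>& package)
{
    if (package.size() < 2) return false;
    const ToyTx& child = package.back();
    const std::set<std::string> inputs(child.spends.begin(), child.spends.end());
    return std::all_of(package.begin(), package.end() - 1,
                       [&inputs](const ToyTx& parent) { return inputs.count(parent.txid) > 0; });
}

int main()
{
    const ToyTx parent{"A", {"coinbase"}};
    const ToyTx child{"B", {"A"}};
    const ToyTx grandchild{"C", {"B"}};
    const bool ok_pair = ToyIsChildWithParents({parent, child});             // accepted shape
    const bool ok_3gen = ToyIsChildWithParents({parent, child, grandchild}); // rejected: "A" is not spent by "C"
    return (ok_pair && !ok_3gen) ? 0 : 1;
}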
diff --git a/src/test/util/net.cpp b/src/test/util/net.cpp
index 696fd902f8..4f15feb8e6 100644
--- a/src/test/util/net.cpp
+++ b/src/test/util/net.cpp
@@ -47,10 +47,10 @@ std::vector<NodeEvictionCandidate> GetRandomNodeEvictionCandidates(int n_candida
for (int id = 0; id < n_candidates; ++id) {
candidates.push_back({
/*id=*/id,
- /*nTimeConnected=*/static_cast<int64_t>(random_context.randrange(100)),
+ /*m_connected=*/std::chrono::seconds{random_context.randrange(100)},
/*m_min_ping_time=*/std::chrono::microseconds{random_context.randrange(100)},
- /*nLastBlockTime=*/static_cast<int64_t>(random_context.randrange(100)),
- /*nLastTXTime=*/static_cast<int64_t>(random_context.randrange(100)),
+ /*m_last_block_time=*/std::chrono::seconds{random_context.randrange(100)},
+ /*m_last_tx_time=*/std::chrono::seconds{random_context.randrange(100)},
/*fRelevantServices=*/random_context.randbool(),
/*fRelayTxes=*/random_context.randbool(),
/*fBloomFilter=*/random_context.randbool(),
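The hunk above swaps raw int64_t timestamps for std::chrono types in the eviction candidates. A minimal standalone sketch of why typed durations are preferable; the variable names here are illustrative, not the NodeEvictionCandidate fields themselves:

// Standalone sketch: typed std::chrono durations vs. raw int64_t timestamps.
#include <chrono>
#include <cstdint>

int main()
{
    // With a raw integer, the unit is only a naming convention.
    int64_t last_block_time_secs = 100;
    int64_t mixed_units = last_block_time_secs + 5;  // 5 of... what? The compiler cannot tell.

    // With std::chrono, the unit travels with the value and conversions are explicit and checked.
    std::chrono::seconds last_block_time{100};
    auto later = last_block_time + std::chrono::minutes{5};  // converted to seconds for the sum
    return (mixed_units == 105 && later.count() == 400) ? 0 : 1;
}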
diff --git a/src/validation.cpp b/src/validation.cpp
index 1aac71fb0f..446c21e118 100644
--- a/src/validation.cpp
+++ b/src/validation.cpp
@@ -465,6 +465,11 @@ public:
* any transaction spending the same inputs as a transaction in the mempool is considered
* a conflict. */
const bool m_allow_bip125_replacement;
+ /** When true, the mempool will not be trimmed when individual transactions are submitted in
+ * Finalize(). Instead, limits should be enforced at the end to ensure the package is not
+ * partially submitted.
+ */
+ const bool m_package_submission;
/** Parameters for single transaction mempool validation. */
static ATMPArgs SingleAccept(const CChainParams& chainparams, int64_t accept_time,
@@ -476,6 +481,7 @@ public:
/* m_coins_to_uncache */ coins_to_uncache,
/* m_test_accept */ test_accept,
/* m_allow_bip125_replacement */ true,
+ /* m_package_submission */ false,
};
}
@@ -488,9 +494,22 @@ public:
/* m_coins_to_uncache */ coins_to_uncache,
/* m_test_accept */ true,
/* m_allow_bip125_replacement */ false,
+ /* m_package_submission */ false, // not submitting to mempool
};
}
+ /** Parameters for child-with-unconfirmed-parents package validation. */
+ static ATMPArgs PackageChildWithParents(const CChainParams& chainparams, int64_t accept_time,
+ std::vector<COutPoint>& coins_to_uncache) {
+ return ATMPArgs{/* m_chainparams */ chainparams,
+ /* m_accept_time */ accept_time,
+ /* m_bypass_limits */ false,
+ /* m_coins_to_uncache */ coins_to_uncache,
+ /* m_test_accept */ false,
+ /* m_allow_bip125_replacement */ false,
+ /* m_package_submission */ true,
+ };
+ }
// No default ctor to avoid exposing details to clients and allowing the possibility of
// mixing up the order of the arguments. Use static functions above instead.
ATMPArgs() = delete;
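The deleted default constructor and the static helpers above follow a named-factory pattern: with several same-typed booleans, a positional constructor call is easy to get wrong, while a named factory fixes each valid combination in one place. A standalone sketch of the same idea with toy names (ToyArgs is illustrative, not the real ATMPArgs):

// Standalone sketch: named factories prevent silently swapped boolean arguments.
struct ToyArgs {
    const bool m_test_accept;
    const bool m_allow_replacement;
    const bool m_package_submission;

    static ToyArgs SingleAccept()
    {
        return {/*test_accept=*/false, /*allow_replacement=*/true, /*package_submission=*/false};
    }
    static ToyArgs PackageTestAccept()
    {
        return {/*test_accept=*/true, /*allow_replacement=*/false, /*package_submission=*/false};
    }
    static ToyArgs PackageChildWithParents()
    {
        return {/*test_accept=*/false, /*allow_replacement=*/false, /*package_submission=*/true};
    }

    ToyArgs() = delete;  // callers must go through the named factories above

private:
    ToyArgs(bool test_accept, bool allow_replacement, bool package_submission)
        : m_test_accept{test_accept}, m_allow_replacement{allow_replacement},
          m_package_submission{package_submission} {}
};

int main()
{
    const ToyArgs args = ToyArgs::PackageChildWithParents();
    return args.m_package_submission ? 0 : 1;
}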
@@ -500,12 +519,18 @@ public:
MempoolAcceptResult AcceptSingleTransaction(const CTransactionRef& ptx, ATMPArgs& args) EXCLUSIVE_LOCKS_REQUIRED(cs_main);
/**
- * Multiple transaction acceptance. Transactions may or may not be interdependent,
- * but must not conflict with each other. Parents must come before children if any
- * dependencies exist.
+ * Multiple transaction acceptance. Transactions may or may not be interdependent, but must not
+ * conflict with each other, and the transactions cannot already be in the mempool. Parents must
+ * come before children if any dependencies exist.
*/
PackageMempoolAcceptResult AcceptMultipleTransactions(const std::vector<CTransactionRef>& txns, ATMPArgs& args) EXCLUSIVE_LOCKS_REQUIRED(cs_main);
+ /**
+ * Package (more specific than just multiple transactions) acceptance. Package must be a child
+ * with all of its unconfirmed parents, and topologically sorted.
+ */
+ PackageMempoolAcceptResult AcceptPackage(const Package& package, ATMPArgs& args) EXCLUSIVE_LOCKS_REQUIRED(cs_main);
+
private:
// All the intermediate state that gets passed between the various levels
// of checking a given transaction.
@@ -578,6 +603,14 @@ private:
// limiting is performed, false otherwise.
bool Finalize(const ATMPArgs& args, Workspace& ws) EXCLUSIVE_LOCKS_REQUIRED(cs_main, m_pool.cs);
+ // Submit all transactions to the mempool and call ConsensusScriptChecks to add to the script
+ // cache - should only be called after successful validation of all transactions in the package.
+ // The package may end up partially submitted after size limiting; returns true if all
+ // transactions are successfully added to the mempool, false otherwise.
+ bool FinalizePackage(const ATMPArgs& args, std::vector<Workspace>& workspaces, PackageValidationState& package_state,
+ std::map<const uint256, const MempoolAcceptResult>& results)
+ EXCLUSIVE_LOCKS_REQUIRED(cs_main, m_pool.cs);
+
// Compare a package's feerate against minimum allowed.
bool CheckFeeRate(size_t package_size, CAmount package_fee, TxValidationState& state) EXCLUSIVE_LOCKS_REQUIRED(cs_main, m_pool.cs)
{
@@ -899,6 +932,10 @@ bool MemPoolAccept::PackageMempoolChecks(const std::vector<CTransactionRef>& txn
AssertLockHeld(cs_main);
AssertLockHeld(m_pool.cs);
+ // CheckPackageLimits expects the package transactions to not already be in the mempool.
+ assert(std::all_of(txns.cbegin(), txns.cend(), [this](const auto& tx)
+ { return !m_pool.exists(GenTxid::Txid(tx->GetHash()));}));
+
std::string err_string;
if (!m_pool.CheckPackageLimits(txns, m_limit_ancestors, m_limit_ancestor_size, m_limit_descendants,
m_limit_descendant_size, err_string)) {
@@ -991,13 +1028,18 @@ bool MemPoolAccept::Finalize(const ATMPArgs& args, Workspace& ws)
// - it's not being re-added during a reorg which bypasses typical mempool fee limits
// - the node is not behind
// - the transaction is not dependent on any other transactions in the mempool
- bool validForFeeEstimation = !bypass_limits && IsCurrentForFeeEstimation(m_active_chainstate) && m_pool.HasNoInputsOf(tx);
+ // - it's not part of a package. Since package relay is not currently supported, this
+ // transaction has not necessarily been accepted to miners' mempools.
+ bool validForFeeEstimation = !bypass_limits && !args.m_package_submission && IsCurrentForFeeEstimation(m_active_chainstate) && m_pool.HasNoInputsOf(tx);
// Store transaction in memory
m_pool.addUnchecked(*entry, ws.m_ancestors, validForFeeEstimation);
// trim mempool and check if tx was trimmed
- if (!bypass_limits) {
+ // If we are validating a package, don't trim here because we could evict a previous transaction
+ // in the package. LimitMempoolSize() should be called at the very end to make sure the mempool
+ // is still within limits and package submission happens atomically.
+ if (!args.m_package_submission && !bypass_limits) {
LimitMempoolSize(m_pool, m_active_chainstate.CoinsTip(), gArgs.GetIntArg("-maxmempool", DEFAULT_MAX_MEMPOOL_SIZE) * 1000000, std::chrono::hours{gArgs.GetIntArg("-mempoolexpiry", DEFAULT_MEMPOOL_EXPIRY)});
if (!m_pool.exists(GenTxid::Txid(hash)))
return state.Invalid(TxValidationResult::TX_MEMPOOL_POLICY, "mempool full");
@@ -1005,6 +1047,69 @@ bool MemPoolAccept::Finalize(const ATMPArgs& args, Workspace& ws)
return true;
}
+bool MemPoolAccept::FinalizePackage(const ATMPArgs& args, std::vector<Workspace>& workspaces,
+ PackageValidationState& package_state,
+ std::map<const uint256, const MempoolAcceptResult>& results)
+{
+ AssertLockHeld(cs_main);
+ AssertLockHeld(m_pool.cs);
+ bool all_submitted = true;
+ // ConsensusScriptChecks adds to the script cache and is therefore consensus-critical;
+ // CheckInputsFromMempoolAndCache asserts that transactions only spend coins available from the
+ // mempool or UTXO set. Submit each transaction to the mempool immediately after calling
+ // ConsensusScriptChecks to make the outputs available for subsequent transactions.
+ for (Workspace& ws : workspaces) {
+ if (!ConsensusScriptChecks(args, ws)) {
+ results.emplace(ws.m_ptx->GetWitnessHash(), MempoolAcceptResult::Failure(ws.m_state));
+ // Since PolicyScriptChecks() passed, this should never fail.
+ all_submitted = Assume(false);
+ }
+
+ // Re-calculate mempool ancestors to call addUnchecked(). They may have changed since the
+ // last calculation done in PreChecks, since package ancestors have already been submitted.
+ std::string err_string;
+ if (!m_pool.CalculateMemPoolAncestors(*ws.m_entry, ws.m_ancestors, m_limit_ancestors,
+ m_limit_ancestor_size, m_limit_descendants,
+ m_limit_descendant_size, err_string)) {
+ results.emplace(ws.m_ptx->GetWitnessHash(), MempoolAcceptResult::Failure(ws.m_state));
+ // Since PreChecks() and PackageMempoolChecks() both enforce limits, this should never fail.
+ all_submitted = Assume(false);
+ }
+ // If we call LimitMempoolSize() for each individual Finalize(), the mempool will not take
+ // the transaction's descendant feerate into account because it hasn't seen them yet. Also,
+ // we risk evicting a transaction that a subsequent package transaction depends on. Instead,
+ // allow the mempool to temporarily bypass its limits (by at most the maximum package size) while
+ // submitting transactions individually and then trim at the very end.
+ if (!Finalize(args, ws)) {
+ results.emplace(ws.m_ptx->GetWitnessHash(), MempoolAcceptResult::Failure(ws.m_state));
+ // Since LimitMempoolSize() won't be called, this should never fail.
+ all_submitted = Assume(false);
+ }
+ }
+
+ // It may or may not be the case that all the transactions made it into the mempool. Regardless,
+ // make sure we haven't exceeded max mempool size.
+ LimitMempoolSize(m_pool, m_active_chainstate.CoinsTip(),
+ gArgs.GetIntArg("-maxmempool", DEFAULT_MAX_MEMPOOL_SIZE) * 1000000,
+ std::chrono::hours{gArgs.GetIntArg("-mempoolexpiry", DEFAULT_MEMPOOL_EXPIRY)});
+ if (!all_submitted) return false;
+
+ // Find the wtxids of the transactions that made it into the mempool. Allow partial submission,
+ // but don't report success unless they all made it into the mempool.
+ for (Workspace& ws : workspaces) {
+ if (m_pool.exists(GenTxid::Wtxid(ws.m_ptx->GetWitnessHash()))) {
+ results.emplace(ws.m_ptx->GetWitnessHash(),
+ MempoolAcceptResult::Success(std::move(ws.m_replaced_transactions), ws.m_vsize, ws.m_base_fees));
+ GetMainSignals().TransactionAddedToMempool(ws.m_ptx, m_pool.GetAndIncrementSequence());
+ } else {
+ all_submitted = false;
+ ws.m_state.Invalid(TxValidationResult::TX_MEMPOOL_POLICY, "mempool full");
+ results.emplace(ws.m_ptx->GetWitnessHash(), MempoolAcceptResult::Failure(ws.m_state));
+ }
+ }
+ return all_submitted;
+}
+
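FinalizePackage's ordering (submit every package member first, trim the pool once, then report only the entries that survived) can be illustrated with a standalone toy model; ToyPool and the helpers below are illustrative containers, not the real CTxMemPool:

// Standalone sketch: add the whole package before trimming, trim once,
// then report which members survived.
#include <cstddef>
#include <map>
#include <string>
#include <vector>

using ToyPool = std::map<std::string, int>;  // txid -> feerate

// Evict the lowest-feerate entries until the pool holds at most max_entries.
void ToyTrim(ToyPool& pool, std::size_t max_entries)
{
    while (pool.size() > max_entries) {
        auto worst = pool.begin();
        for (auto it = pool.begin(); it != pool.end(); ++it) {
            if (it->second < worst->second) worst = it;
        }
        pool.erase(worst);
    }
}

std::vector<std::string> ToySubmitPackage(ToyPool& pool, const ToyPool& package, std::size_t max_entries)
{
    // Trimming after each insertion could evict a parent that a later package
    // member depends on, so all insertions happen before the single trim.
    for (const auto& entry : package) pool.insert(entry);
    ToyTrim(pool, max_entries);
    std::vector<std::string> accepted;
    for (const auto& entry : package) {
        if (pool.count(entry.first)) accepted.push_back(entry.first);
    }
    return accepted;
}

int main()
{
    ToyPool pool{{"x", 1}, {"y", 2}};
    const ToyPool package{{"parent", 5}, {"child", 6}};
    const auto accepted = ToySubmitPackage(pool, package, /*max_entries=*/3);
    return accepted.size() == 2 ? 0 : 1;  // "x" is evicted; both package members remain
}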
MempoolAcceptResult MemPoolAccept::AcceptSingleTransaction(const CTransactionRef& ptx, ATMPArgs& args)
{
AssertLockHeld(cs_main);
@@ -1090,9 +1195,114 @@ PackageMempoolAcceptResult MemPoolAccept::AcceptMultipleTransactions(const std::
}
}
+ if (args.m_test_accept) return PackageMempoolAcceptResult(package_state, std::move(results));
+
+ if (!FinalizePackage(args, workspaces, package_state, results)) {
+ package_state.Invalid(PackageValidationResult::PCKG_TX, "submission failed");
+ return PackageMempoolAcceptResult(package_state, std::move(results));
+ }
+
return PackageMempoolAcceptResult(package_state, std::move(results));
}
+PackageMempoolAcceptResult MemPoolAccept::AcceptPackage(const Package& package, ATMPArgs& args)
+{
+ AssertLockHeld(cs_main);
+ PackageValidationState package_state;
+
+ // Check that the package is well-formed. If it isn't, we won't try to validate any of the
+ // transactions and thus won't return any MempoolAcceptResults, just a package-wide error.
+
+ // Context-free package checks.
+ if (!CheckPackage(package, package_state)) return PackageMempoolAcceptResult(package_state, {});
+
+ // Every transaction in the package must be a parent of the last transaction. This is just an
+ // opportunity for us to fail fast on a context-free check without taking the mempool lock.
+ if (!IsChildWithParents(package)) {
+ package_state.Invalid(PackageValidationResult::PCKG_POLICY, "package-not-child-with-parents");
+ return PackageMempoolAcceptResult(package_state, {});
+ }
+
+ const auto& child = package[package.size() - 1];
+ // The package must be 1 child with all of its unconfirmed parents. The package is expected to
+ // be sorted, so the last transaction is the child.
+ std::unordered_set<uint256, SaltedTxidHasher> unconfirmed_parent_txids;
+ std::transform(package.cbegin(), package.cend() - 1,
+ std::inserter(unconfirmed_parent_txids, unconfirmed_parent_txids.end()),
+ [](const auto& tx) { return tx->GetHash(); });
+
+ // All child inputs must refer to a preceding package transaction or a confirmed UTXO. The only
+ // way to verify this is to look up the child's inputs in our current coins view (not including
+ // mempool), and enforce that all parents not present in the package be available at chain tip.
+ // Since this check can bring new coins into the coins cache, keep track of these coins and
+ // uncache them if we don't end up submitting this package to the mempool.
+ const CCoinsViewCache& coins_tip_cache = m_active_chainstate.CoinsTip();
+ for (const auto& input : child->vin) {
+ if (!coins_tip_cache.HaveCoinInCache(input.prevout)) {
+ args.m_coins_to_uncache.push_back(input.prevout);
+ }
+ }
+ // Using the MemPoolAccept m_view cache allows us to look up these same coins faster later.
+ // This should be connecting directly to CoinsTip, not to m_viewmempool, because we specifically
+ // require inputs to be confirmed if they aren't in the package.
+ m_view.SetBackend(m_active_chainstate.CoinsTip());
+ const auto package_or_confirmed = [this, &unconfirmed_parent_txids](const auto& input) {
+ return unconfirmed_parent_txids.count(input.prevout.hash) > 0 || m_view.HaveCoin(input.prevout);
+ };
+ if (!std::all_of(child->vin.cbegin(), child->vin.cend(), package_or_confirmed)) {
+ package_state.Invalid(PackageValidationResult::PCKG_POLICY, "package-not-child-with-unconfirmed-parents");
+ return PackageMempoolAcceptResult(package_state, {});
+ }
+ // Protect against bugs where we pull more inputs from disk that are not added to
+ // coins_to_uncache. The backend will be connected again when needed in PreChecks.
+ m_view.SetBackend(m_dummy);
+
+ LOCK(m_pool.cs);
+ std::map<const uint256, const MempoolAcceptResult> results;
+ // As node operators are free to set their mempool policies however they please, it's possible
+ // for package transaction(s) to already be in the mempool, and we don't want to reject the
+ // entire package in that case (as that could be a censorship vector). Filter the transactions
+ // that are already in mempool and add their information to results, since we already have them.
+ std::vector<CTransactionRef> txns_new;
+ for (const auto& tx : package) {
+ const auto& wtxid = tx->GetWitnessHash();
+ const auto& txid = tx->GetHash();
+ // There are 3 possibilities: already in mempool, same-txid-diff-wtxid already in mempool,
+ // or not in mempool. An already confirmed tx is treated as one not in mempool, because all
+ // we know is that the inputs aren't available.
+ if (m_pool.exists(GenTxid::Wtxid(wtxid))) {
+ // Exact transaction already exists in the mempool.
+ auto iter = m_pool.GetIter(txid);
+ assert(iter != std::nullopt);
+ results.emplace(wtxid, MempoolAcceptResult::MempoolTx(iter.value()->GetTxSize(), iter.value()->GetFee()));
+ } else if (m_pool.exists(GenTxid::Txid(txid))) {
+ // Transaction with the same non-witness data but different witness (same txid,
+ // different wtxid) already exists in the mempool.
+ //
+ // We don't allow replacement transactions right now, so just swap the package
+ // transaction for the mempool one. Note that we are ignoring the validity of the
+ // package transaction passed in.
+ // TODO: allow witness replacement in packages.
+ auto iter = m_pool.GetIter(txid);
+ assert(iter != std::nullopt);
+ results.emplace(txid, MempoolAcceptResult::MempoolTx(iter.value()->GetTxSize(), iter.value()->GetFee()));
+ } else {
+ // Transaction does not already exist in the mempool.
+ txns_new.push_back(tx);
+ }
+ }
+
+ // Nothing to do if the entire package has already been submitted.
+ if (txns_new.empty()) return PackageMempoolAcceptResult(package_state, std::move(results));
+ // Validate the (deduplicated) transactions as a package.
+ auto submission_result = AcceptMultipleTransactions(txns_new, args);
+ // Include already-in-mempool transaction results in the final result.
+ for (const auto& [wtxid, mempoolaccept_res] : results) {
+ submission_result.m_tx_results.emplace(wtxid, mempoolaccept_res);
+ }
+ return submission_result;
+}
+
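The de-duplication step in AcceptPackage splits the package into transactions that already have a mempool entry and transactions that still need validation, then merges the two sets of results so the caller sees one outcome per package member. A standalone sketch with plain containers (the names below are illustrative, not the real types):

// Standalone sketch: split the package into already-known entries and new
// transactions, then merge the results.
#include <map>
#include <set>
#include <string>
#include <vector>

int main()
{
    const std::set<std::string> mempool{"parent"};              // submitted earlier by someone else
    const std::vector<std::string> package{"parent", "child"};  // child-with-parents package

    std::map<std::string, std::string> results;  // txid -> outcome, one entry per package member
    std::vector<std::string> txns_new;           // only these go through package validation
    for (const auto& txid : package) {
        if (mempool.count(txid)) {
            results.emplace(txid, "MEMPOOL_ENTRY");  // reuse the existing entry's size/fee info
        } else {
            txns_new.push_back(txid);
        }
    }
    // ...validate txns_new as a package, then merge its per-transaction results
    // into `results` so the caller sees an outcome for every package member.
    return (txns_new.size() == 1 && results.count("parent") == 1) ? 0 : 1;
}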
} // anon namespace
MempoolAcceptResult AcceptToMemoryPool(CChainState& active_chainstate, const CTransactionRef& tx,
@@ -1125,19 +1335,31 @@ PackageMempoolAcceptResult ProcessNewPackage(CChainState& active_chainstate, CTx
const Package& package, bool test_accept)
{
AssertLockHeld(cs_main);
- assert(test_accept); // Only allow package accept dry-runs (testmempoolaccept RPC).
assert(!package.empty());
assert(std::all_of(package.cbegin(), package.cend(), [](const auto& tx){return tx != nullptr;}));
std::vector<COutPoint> coins_to_uncache;
const CChainParams& chainparams = Params();
- auto args = MemPoolAccept::ATMPArgs::PackageTestAccept(chainparams, GetTime(), coins_to_uncache);
- const PackageMempoolAcceptResult result = MemPoolAccept(pool, active_chainstate).AcceptMultipleTransactions(package, args);
+ const auto result = [&]() EXCLUSIVE_LOCKS_REQUIRED(cs_main) {
+ AssertLockHeld(cs_main);
+ if (test_accept) {
+ auto args = MemPoolAccept::ATMPArgs::PackageTestAccept(chainparams, GetTime(), coins_to_uncache);
+ return MemPoolAccept(pool, active_chainstate).AcceptMultipleTransactions(package, args);
+ } else {
+ auto args = MemPoolAccept::ATMPArgs::PackageChildWithParents(chainparams, GetTime(), coins_to_uncache);
+ return MemPoolAccept(pool, active_chainstate).AcceptPackage(package, args);
+ }
+ }();
// Uncache coins pertaining to transactions that were not submitted to the mempool.
- for (const COutPoint& hashTx : coins_to_uncache) {
- active_chainstate.CoinsTip().Uncache(hashTx);
+ if (test_accept || result.m_state.IsInvalid()) {
+ for (const COutPoint& hashTx : coins_to_uncache) {
+ active_chainstate.CoinsTip().Uncache(hashTx);
+ }
}
+ // Ensure the coins cache is still within limits.
+ BlockValidationState state_dummy;
+ active_chainstate.FlushStateToDisk(state_dummy, FlushStateMode::PERIODIC);
return result;
}
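ProcessNewPackage now initialises a single const result from branching logic by invoking a lambda in place. A minimal standalone sketch of that C++ idiom (the lock annotations used in the patch are omitted here):

// Standalone sketch: `result` stays const even though its value depends on a runtime branch.
#include <string>

std::string Validate(bool test_accept)
{
    const auto result = [&]() {
        if (test_accept) return std::string{"test accept: validate only"};
        return std::string{"submit: validate and add to mempool"};
    }();  // the trailing () calls the lambda immediately
    return result;
}

int main()
{
    return Validate(/*test_accept=*/true) == "test accept: validate only" ? 0 : 1;
}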
diff --git a/src/validation.h b/src/validation.h
index 534df9bc87..ddfc8df939 100644
--- a/src/validation.h
+++ b/src/validation.h
@@ -60,6 +60,16 @@ static const unsigned int DEFAULT_ANCESTOR_SIZE_LIMIT = 101;
static const unsigned int DEFAULT_DESCENDANT_LIMIT = 25;
/** Default for -limitdescendantsize, maximum kilobytes of in-mempool descendants */
static const unsigned int DEFAULT_DESCENDANT_SIZE_LIMIT = 101;
+
+// If a package is submitted, it must be within the mempool's ancestor/descendant limits. Since a
+// submitted package must be child-with-unconfirmed-parents (all of the transactions are ancestors
+// of the child), package limits are ultimately bounded by mempool package limits. Ensure that the
+// defaults reflect this constraint.
+static_assert(DEFAULT_DESCENDANT_LIMIT >= MAX_PACKAGE_COUNT);
+static_assert(DEFAULT_ANCESTOR_LIMIT >= MAX_PACKAGE_COUNT);
+static_assert(DEFAULT_ANCESTOR_SIZE_LIMIT >= MAX_PACKAGE_SIZE);
+static_assert(DEFAULT_DESCENDANT_SIZE_LIMIT >= MAX_PACKAGE_SIZE);
+
/** Default for -mempoolexpiry, expiration time for mempool transactions in hours */
static const unsigned int DEFAULT_MEMPOOL_EXPIRY = 336;
/** Maximum number of dedicated script-checking threads allowed */
@@ -151,17 +161,19 @@ struct MempoolAcceptResult {
enum class ResultType {
VALID, //!> Fully validated, valid.
INVALID, //!> Invalid.
+ MEMPOOL_ENTRY, //!> Valid, transaction was already in the mempool.
};
const ResultType m_result_type;
const TxValidationState m_state;
- // The following fields are only present when m_result_type = ResultType::VALID
+ // The following fields are only present when m_result_type = ResultType::VALID or MEMPOOL_ENTRY
/** Mempool transactions replaced by the tx per BIP 125 rules. */
const std::optional<std::list<CTransactionRef>> m_replaced_transactions;
/** Virtual size as used by the mempool, calculated using serialized size and sigops. */
const std::optional<int64_t> m_vsize;
/** Raw base fees in satoshis. */
const std::optional<CAmount> m_base_fees;
+
static MempoolAcceptResult Failure(TxValidationState state) {
return MempoolAcceptResult(state);
}
@@ -170,6 +182,10 @@ struct MempoolAcceptResult {
return MempoolAcceptResult(std::move(replaced_txns), vsize, fees);
}
+ static MempoolAcceptResult MempoolTx(int64_t vsize, CAmount fees) {
+ return MempoolAcceptResult(vsize, fees);
+ }
+
// Private constructors. Use static methods MempoolAcceptResult::Success, etc. to construct.
private:
/** Constructor for failure case */
@@ -182,6 +198,10 @@ private:
explicit MempoolAcceptResult(std::list<CTransactionRef>&& replaced_txns, int64_t vsize, CAmount fees)
: m_result_type(ResultType::VALID),
m_replaced_transactions(std::move(replaced_txns)), m_vsize{vsize}, m_base_fees(fees) {}
+
+ /** Constructor for already-in-mempool case. It wouldn't replace any transactions. */
+ explicit MempoolAcceptResult(int64_t vsize, CAmount fees)
+ : m_result_type(ResultType::MEMPOOL_ENTRY), m_vsize{vsize}, m_base_fees(fees) {}
};
/**
@@ -191,7 +211,7 @@ struct PackageMempoolAcceptResult
{
const PackageValidationState m_state;
/**
- * Map from wtxid to finished MempoolAcceptResults. The client is responsible
+ * Map from (w)txid to finished MempoolAcceptResults. The client is responsible
* for keeping track of the transaction objects themselves. If a result is not
* present, it means validation was unfinished for that transaction. If there
* was a package-wide error (see result in m_state), m_tx_results will be empty.
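The MEMPOOL_ENTRY result type added above reuses the optional size/fee fields that were previously populated only for VALID results. A standalone sketch of the pattern a caller relies on (toy types, not the real MempoolAcceptResult):

// Standalone sketch: size and fee are optional fields that only carry values
// for non-INVALID results.
#include <cassert>
#include <optional>

enum class ToyResultType { VALID, INVALID, MEMPOOL_ENTRY };

struct ToyResult {
    ToyResultType m_result_type;
    std::optional<long long> m_vsize;      // unset for INVALID results
    std::optional<long long> m_base_fees;  // unset for INVALID results
};

ToyResult ToyFailure() { return {ToyResultType::INVALID, std::nullopt, std::nullopt}; }
ToyResult ToyMempoolTx(long long vsize, long long fees) { return {ToyResultType::MEMPOOL_ENTRY, vsize, fees}; }

int main()
{
    const ToyResult already_there = ToyMempoolTx(/*vsize=*/150, /*fees=*/1000);
    const ToyResult rejected = ToyFailure();
    // Callers check the result type before reading the optional fields.
    if (already_there.m_result_type != ToyResultType::INVALID) {
        assert(already_there.m_vsize.has_value() && already_there.m_base_fees.has_value());
    }
    assert(!rejected.m_vsize.has_value());
    return 0;
}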
@@ -225,16 +245,12 @@ MempoolAcceptResult AcceptToMemoryPool(CChainState& active_chainstate, const CTr
EXCLUSIVE_LOCKS_REQUIRED(cs_main);
/**
-* Atomically test acceptance of a package. If the package only contains one tx, package rules still
-* apply. Package validation does not allow BIP125 replacements, so the transaction(s) cannot spend
-* the same inputs as any transaction in the mempool.
-* @param[in] txns Group of transactions which may be independent or contain
-* parent-child dependencies. The transactions must not conflict
-* with each other, i.e., must not spend the same inputs. If any
-* dependencies exist, parents must appear anywhere in the list
-* before their children.
+* Validate (and maybe submit) a package to the mempool. See doc/policy/packages.md for full details
+* on package validation rules.
+* @param[in] test_accept When true, run validation checks but don't submit to mempool.
* @returns a PackageMempoolAcceptResult which includes a MempoolAcceptResult for each transaction.
-* If a transaction fails, validation will exit early and some results may be missing.
+* If a transaction fails, validation will exit early and some results may be missing. It is also
+* possible for the package to be partially submitted.
*/
PackageMempoolAcceptResult ProcessNewPackage(CChainState& active_chainstate, CTxMemPool& pool,
const Package& txns, bool test_accept)
diff --git a/test/functional/p2p_timeouts.py b/test/functional/p2p_timeouts.py
index 602f826782..cf714bc888 100755
--- a/test/functional/p2p_timeouts.py
+++ b/test/functional/p2p_timeouts.py
@@ -86,7 +86,7 @@ class TimeoutsTest(BitcoinTestFramework):
]
with self.nodes[0].assert_debug_log(expected_msgs=expected_timeout_logs):
- self.mock_forward(5)
+ self.mock_forward(2)
no_verack_node.wait_for_disconnect(timeout=1)
no_version_node.wait_for_disconnect(timeout=1)
no_send_node.wait_for_disconnect(timeout=1)