02a6702 Add option `-alerts` to opt out of alert system (Wladimir J. van der Laan)
Make it possible to opt out of the centralized alert system by providing
the option `-noalerts` or `-alerts=0`. The default remains unchanged.
This is a gentler form of #6260, in which I went a bit overboard by
removing the alert system completely.
I intend to add this to the GUI options in another pull after this.
65b9454 Use best header chain timestamps to detect partitioning (Gavin Andresen)
c257a8c Prune: Support noncontiguous block files (Adam Weiss)
eb83719 Consensus: Refactor: Separate Consensus::CheckTxInputs and GetSpendHeight in CheckInputs (Jorge Timón)
ContextualCheckBlockHeader
14d4eef Fix removing of orphan transactions (Alex Morcos)
dce8360 Reduce checkpoints' effect on consensus. (Pieter Wuille)
The partition checking code was using chainActive timestamps
to detect partitioning; with headers-first syncing, it should use
(and with this pull request, does use) pindexBestHeader timestamps.
Fixes issue #6251
In some corner cases, it may be possible for recent blocks to end up in
the same block file as much older blocks. Previously, the pruning code
would stop looking for files to remove upon first encountering a file
containing a block that cannot be pruned; now it keeps looking for
candidate files until the target is met and all other criteria are
satisfied.
This can result in a set of block files that is noncontiguous by number
on disk, which is fine except during some reindex corner cases, so make
reindex preparation smarter: keep the data we can actually use and throw
away the rest. This allows pruning to work correctly while downloading
any blocks needed during the reindex.
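A simplified, standalone sketch of the new scanning behaviour (the struct,
names and thresholds below are illustrative, not the actual FindFilesToPrune
code): instead of stopping at the first file that cannot be pruned, the loop
skips it and keeps examining later candidates until enough space is freed.

    #include <cstdint>
    #include <set>
    #include <vector>

    struct BlockFileInfo {
        int      nFile;        // block file number (blkNNNNN.dat)
        uint64_t nSize;        // bytes used by the file
        int      nHeightLast;  // highest block height stored in the file
    };

    // Collect files that may be deleted to free nBytesToPrune, never touching
    // files that contain any block above nLastBlockWeCanPrune.
    std::set<int> FindPrunableFiles(const std::vector<BlockFileInfo>& files,
                                    uint64_t nBytesToPrune,
                                    int nLastBlockWeCanPrune)
    {
        std::set<int> setFilesToPrune;
        uint64_t nBytesFreed = 0;
        for (const BlockFileInfo& file : files) {
            if (nBytesFreed >= nBytesToPrune)
                break;                    // target met
            if (file.nHeightLast > nLastBlockWeCanPrune)
                continue;                 // skip, but keep scanning (the old code stopped here)
            nBytesFreed += file.nSize;
            setFilesToPrune.insert(file.nFile);
        }
        return setFilesToPrune;
    }

The resulting set can be noncontiguous by file number, which is why the
reindex path has to tolerate gaps between block files.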
AcceptBlock will no longer process an unrequested block unless it has not
been previously processed and has more work than chainActive.Tip().
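In other words, roughly the following decision (a standalone sketch with
simplified types, not the actual AcceptBlock code):

    #include <cstdint>

    struct NewBlock {
        bool     fRequested;         // did we explicitly request this block from a peer?
        bool     fAlreadyProcessed;  // have we processed (stored) it before?
        uint64_t nChainWork;         // total work of the chain ending in this block
    };

    bool ShouldProcess(const NewBlock& block, uint64_t nTipChainWork)
    {
        if (block.fRequested)
            return true;                          // requested blocks are always processed
        if (block.fAlreadyProcessed)
            return false;                         // unrequested duplicate: ignore
        return block.nChainWork > nTipChainWork;  // must have more work than chainActive.Tip()
    }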
a1ba077 Ignore getheaders requests when not synced. (Suhas Daftuar)
28bf062 Fix off-by-one error w/ nLockTime in the wallet (Peter Todd)
8273793 Eliminate compiler warning due to unused variable (Suhas Daftuar)
-UndoWriteToDisk
-WriteBlockToDisk
da29ecb Consensus: MOVEONLY: Move CValidationState from main consensus/validation (jtimon)
27afcd8 Consensus: Refactor: Decouple CValidationState from main::AbortNode() (Cory Fields)
Previously, due to an off-by-one error, the wallet ignored
nLockTime-by-height transactions that would be valid in the next block
even though they are accepted into the mempool. The transactions
wouldn't show up until confirmed, nor would they be included in the
unconfirmed balance. Similar to the mempool behavior fix in 665bdd3b,
the wallet code was calling IsFinalTx() directly without taking into
account that doing so tells you whether the transaction could have been
mined in the *current* block, rather than the next block.
To fix this, we strip IsFinalTx() of non-consensus-critical
functionality, removing the default arguments, and add CheckFinalTx()
to check whether a transaction will be final in the next block.
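A standalone sketch of the distinction (simplified: only nLockTime is
considered here, whereas the real IsFinalTx()/CheckFinalTx() also look at
input nSequence values):

    #include <cstdint>

    static const uint32_t LOCKTIME_THRESHOLD = 500000000; // below this, nLockTime is a block height

    // Could a transaction with this nLockTime be included in a block at
    // height nBlockHeight / time nBlockTime?  (Consensus-style check.)
    bool IsFinal(uint32_t nLockTime, int nBlockHeight, int64_t nBlockTime)
    {
        if (nLockTime == 0)
            return true;
        if (nLockTime < LOCKTIME_THRESHOLD)
            return (int64_t)nLockTime < nBlockHeight;  // lock by height
        return (int64_t)nLockTime < nBlockTime;        // lock by time
    }

    // Wallet/mempool-style question: will it be final in the *next* block?
    // Passing nChainHeight instead of nChainHeight + 1 is exactly the
    // off-by-one this commit fixes.
    bool FinalInNextBlock(uint32_t nLockTime, int nChainHeight, int64_t nAdjustedTime)
    {
        return IsFinal(nLockTime, nChainHeight + 1, nAdjustedTime);
    }

For example, a transaction with nLockTime equal to the current tip height
cannot go into the tip's block but can go into the next one, so the wallet
should show it as unconfirmed rather than ignore it.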
8ba7f84 Reduce download timeouts as blocks arrive (Suhas Daftuar)
36cba8f Alert if it is very likely we are getting a bad chain (Gavin Andresen)
935bd0a Chainparams: Refactor: Decouple main::GetBlockValue() from Params() [renamed GetBlockSubsidy] (Jorge Timón)
Remove redundant getter CChainParams::SubsidyHalvingInterval()
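The decoupled function takes the halving interval from a consensus-parameters
argument instead of reaching into Params(). A standalone sketch of that shape
(the struct here is a stand-in for the real consensus parameters):

    #include <cstdint>

    typedef int64_t CAmount;
    static const CAmount COIN = 100000000;

    struct ConsensusParams {
        int nSubsidyHalvingInterval;   // 210000 on Bitcoin's main network
    };

    CAmount GetBlockSubsidy(int nHeight, const ConsensusParams& params)
    {
        int halvings = nHeight / params.nSubsidyHalvingInterval;
        if (halvings >= 64)
            return 0;                  // shifting by 64+ bits would be undefined; subsidy is 0 anyway
        CAmount nSubsidy = 50 * COIN;  // initial block subsidy
        nSubsidy >>= halvings;         // halve once per interval
        return nSubsidy;
    }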
Previously this was cleared only after UnlinkPrunedFiles, but it should really be cleared after FindFilesToPrune, regardless of whether there are any files to be pruned.
51aa249 Chainparams: Refactor: Decouple IsSuperMajority from Params() (Jorge Timón)
86a5f4b Relocate calls to CheckDiskSpace (Alex Morcos)
67708ac Write block index more frequently than cache flushes (Pieter Wuille)
b3ed423 Cache tweak and logging improvements (Pieter Wuille)
fc684ad Use accurate memory for flushing decisions (Pieter Wuille)
046392d Keep track of memory usage in CCoinsViewCache (Pieter Wuille)
540629c Add memusage.h (Pieter Wuille)
Create a monitoring task that counts how many blocks have been found in the last four hours.
If very few or too many have been found, an alert is triggered.
"Very few" and "too many" are set based on a false positive rate of once every fifty years of constant running with constant hashing power, which works out to getting 5 or fewer or 48 or more blocks in four hours (instead of the average of 24).
Only one alert per day is triggered, so if you get disconnected from the network (or are being Sybil'ed), -alertnotify will be triggered after 3.5 hours, but you won't get another -alertnotify for 24 hours.
Tested with a new unit test and by running on the main network with -debug=partitioncheck.
Run test/test_bitcoin --log_level=message to see the alert messages:
WARNING: check your network connection, 3 blocks received in the last 4 hours (24 expected)
WARNING: abnormally high number of blocks generated, 60 blocks received in the last 4 hours (24 expected)
The -debug=partitioncheck debug.log messages look like:
ThreadPartitionCheck : Found 22 blocks in the last 4 hours
ThreadPartitionCheck : likelihood: 0.0777702
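Those thresholds follow from modelling block arrival as a Poisson process
with a mean of 24 blocks per four-hour window. A standalone approximation of
the arithmetic (illustrative only, not the actual ThreadPartitionCheck code):

    #include <cmath>
    #include <cstdio>
    #include <initializer_list>

    // Poisson probability of seeing exactly k blocks when lambda are expected.
    static double PoissonPmf(int k, double lambda)
    {
        return std::exp(k * std::log(lambda) - lambda - std::lgamma(k + 1.0));
    }

    int main()
    {
        const int SPAN_HOURS = 4;
        const double BLOCKS_EXPECTED = 24.0;  // one block per 10 minutes, on average

        // Aim for roughly one false positive per fifty years of 4-hour checks.
        const double FIFTY_YEARS = 50.0 * 365 * 24 * 60 * 60;
        const double alertThreshold = 1.0 / (FIFTY_YEARS / (SPAN_HOURS * 60 * 60));

        for (int nBlocks : {3, 5, 6, 24, 48, 60}) {
            double p = PoissonPmf(nBlocks, BLOCKS_EXPECTED);
            std::printf("%2d blocks in the last %d hours, likelihood %g%s\n",
                        nBlocks, SPAN_HOURS, p,
                        p <= alertThreshold ? "  -> would alert" : "");
        }
        return 0;
    }

With these numbers the alert fires for 5 or fewer or 48 or more blocks in the
window, matching the thresholds quoted above.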
Instead of only checking height to decide whether to disable script checks,
actually check whether a block is an ancestor of a checkpoint, up to which
headers have been validated. This means that we don't have to prevent
accepting a side branch anymore: it will be safe, just slower to
validate.
We still need to prevent being fed a multitude of low-difficulty headers
filling up our memory. The mechanism for that is unchanged for now: once
a checkpoint is reached with headers, no headers chain branching off
before that point is allowed anymore.
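A standalone sketch of the ancestry test that replaces the plain height
comparison (simplified index structure; the real code walks the block index):

    #include <cstddef>

    struct BlockIndex {
        const BlockIndex* pprev;  // previous block, NULL for genesis
        int nHeight;
    };

    // Script checks for pindex may only be skipped if pindex lies on the path
    // leading to the validated checkpoint.
    bool IsAncestorOfCheckpoint(const BlockIndex* pindex, const BlockIndex* pindexCheckpoint)
    {
        const BlockIndex* walk = pindexCheckpoint;
        while (walk != NULL && walk->nHeight > pindex->nHeight)
            walk = walk->pprev;   // step back toward pindex's height
        return walk == pindex;    // same entry: pindex is an ancestor (or the checkpoint itself)
    }

A side branch forking off below the checkpoint fails this test, so its
scripts are still verified; it is merely slower to process, not rejected.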
b649e03 Create new BlockPolicyEstimator for fee estimates (Alex Morcos)
This class groups transactions that have been confirmed in blocks into buckets, based on either their fee or their priority. Then for each bucket, the class calculates what percentage of the transactions were confirmed within various numbers of blocks. It does this by keeping an exponentially decaying moving history for each bucket and confirm block count of the percentage of transactions in that bucket that were confirmed within that number of blocks.
- Eliminate txs that didn't have all inputs available at entry from fee/priority calculations
- Add dynamic breakpoints and tracking of confirmation delays in mempool transactions
- Remove old CMinerPolicyEstimator and CBlockAverage code
- New smartfees.py
- Pass a flag to the estimation code, using IsInitialBlockDownload as a proxy for when we are still catching up and shouldn't be counting how many blocks it takes for transactions to be included.
- Add a policyestimator unit test
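A heavily simplified, standalone sketch of that bookkeeping (bucket
boundaries, the decay constant and all names are illustrative, not the actual
estimator code): each new block first decays the history, then records the
confirmed transactions, and an estimate is the fraction of a bucket confirmed
within a given number of blocks.

    #include <utility>
    #include <vector>

    struct Bucket {
        double total = 0.0;               // decayed count of txs tracked in this fee/priority range
        std::vector<double> confirmedBy;  // confirmedBy[n]: decayed count confirmed within n+1 blocks
        explicit Bucket(int maxConfirms) : confirmedBy(maxConfirms, 0.0) {}
    };

    class SimpleFeeStats {
    public:
        SimpleFeeStats(int nBuckets, int maxConfirms, double decayFactor)
            : buckets(nBuckets, Bucket(maxConfirms)), decay(decayFactor) {}

        // Called once per block; confirmed holds (bucket index, blocks to confirm >= 1).
        void ProcessBlock(const std::vector<std::pair<int, int>>& confirmed)
        {
            for (Bucket& b : buckets) {                 // age the history exponentially
                b.total *= decay;
                for (double& c : b.confirmedBy) c *= decay;
            }
            for (const std::pair<int, int>& tx : confirmed) {
                Bucket& b = buckets[tx.first];
                b.total += 1.0;
                for (size_t n = tx.second - 1; n < b.confirmedBy.size(); ++n)
                    b.confirmedBy[n] += 1.0;            // counts for its delay and all larger ones
            }
        }

        // Fraction of this bucket's transactions confirmed within nBlocks (1-based).
        double SuccessRate(int bucket, int nBlocks) const
        {
            const Bucket& b = buckets[bucket];
            return b.total > 0.0 ? b.confirmedBy[nBlocks - 1] / b.total : 0.0;
        }

    private:
        std::vector<Bucket> buckets;
        double decay;
    };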
Make sure we're checking disk space for block index writes and allow for pruning to happen before chainstate writes.
We don't want to erase orphans that still have missing inputs; they should still be tracked as orphans. Also, the transaction that's being accepted can't itself be an orphan, otherwise it would have been accepted previously, so it doesn't need to be added to the erase queue.
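A standalone sketch of that rule (the three hooks are hypothetical stand-ins
for the real mempool/orphan-pool code and are only declared here):

    #include <set>
    #include <string>

    enum class AcceptResult { ACCEPTED, MISSING_INPUTS, INVALID };

    // Hypothetical hooks, not real Bitcoin Core APIs.
    AcceptResult TryAcceptToMemoryPool(const std::string& txid);
    std::set<std::string> OrphansSpendingOutputsOf(const std::string& txid);
    void EraseOrphan(const std::string& txid);

    // After txid has been accepted, retry the orphans that were waiting on it.
    void ProcessDependentOrphans(const std::string& txid)
    {
        for (const std::string& orphan : OrphansSpendingOutputsOf(txid)) {
            AcceptResult res = TryAcceptToMemoryPool(orphan);
            if (res == AcceptResult::MISSING_INPUTS)
                continue;            // still missing inputs: keep tracking it, do not erase
            EraseOrphan(orphan);     // accepted, or invalid for some other reason
        }
        // txid itself cannot be in the orphan pool (it would have been accepted
        // earlier if it were), so nothing needs to be queued for erasure for it.
    }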
a8cdaf5 checkpoints: move the checkpoints enable boolean into main (Cory Fields)
11982d3 checkpoints: Decouple checkpoints from Params (Cory Fields)
6996823 checkpoints: make checkpoints a member of CChainParams (Cory Fields)
9f13a10 checkpoints: store mapCheckpoints in CCheckpointData rather than a pointer (Cory Fields)
b05a89b Non-grammatical language improvements (Luke Dashjr)
7e6d23b Bugfix: Grammar fixes (Corinne Dashjr)
This pertains to app state, so it doesn't make sense to handle it inside
the checkpoint functions.
Pass checkpoint data in as necessary
Use a probabilistic bloom filter to keep track of which addresses
we think we have given our peers, instead of a list.
This uses much less memory, at the cost of sometimes failing to
relay an address to a peer; worst case, if the bloom filter happens
to be as full as it gets, about 1 in 1,000.
Measured memory usage of a full mruset setAddrKnown: 650 Kbytes
Constant memory usage of CRollingBloomFilter addrKnown: 37 Kbytes
This will also help reduce heap fragmentation, because the 37K of
storage is allocated when a CNode is created (when a connection to a
peer is established), and after that there is no per-item memory
allocation for remembered addresses.
I plan on testing by restarting a full node with an empty peers.dat,
running a while with -debug=addrman and -debug=net, and making sure
that the 'addr' message traffic out is reasonable.
(suggestions for better tests welcome)
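A standalone sketch of the "rolling" idea behind CRollingBloomFilter (sizes,
hash count, generation limit and the hashing itself are simplified here, not
taken from the real filter): two fixed-size generations are alternated, so
old entries eventually fall out while memory use stays constant.

    #include <bitset>
    #include <cstddef>
    #include <functional>
    #include <string>

    class RollingBloom {
    public:
        void insert(const std::string& key)
        {
            if (countNew >= LIMIT) {   // rotate: the older generation is discarded
                older = fresh;
                fresh.reset();
                countNew = 0;
            }
            for (int i = 0; i < HASHES; ++i)
                fresh.set(Position(key, i));
            ++countNew;
        }

        // May return a rare false positive; never a false negative for roughly
        // the last LIMIT to 2*LIMIT insertions.
        bool contains(const std::string& key) const
        {
            bool inFresh = true, inOlder = true;
            for (int i = 0; i < HASHES; ++i) {
                std::size_t pos = Position(key, i);
                inFresh = inFresh && fresh.test(pos);
                inOlder = inOlder && older.test(pos);
            }
            return inFresh || inOlder;
        }

    private:
        static const int BITS = 1 << 15;
        static const int HASHES = 5;
        static const int LIMIT = 1000;

        static std::size_t Position(const std::string& key, int n)
        {
            // Simplified salted hash; the real filter uses proper Bloom hashing.
            return std::hash<std::string>()(std::to_string(n) + key) % BITS;
        }

        std::bitset<BITS> fresh, older;
        int countNew = 0;
    };

Keeping two bounded generations is what caps addrKnown at a constant size
instead of letting it grow with the number of addresses seen.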
f7303f9 Use equivalent PoW for non-main-chain requests (Pieter Wuille)
00dcaf4 Change download logic to allow calling getheaders/getdata on inbound peers (Suhas Daftuar)