Age | Commit message | Author
|
|
0bd581a add release notes for removal of priority estimation (Alex Morcos)
b2322e0 Remove priority estimation (Alex Morcos)
|
|
Three categories of modifications:
1)
1 instance of 'The Bitcoin Core developers \n',
1 instance of 'the Bitcoin Core developers\n',
3 instances of 'Bitcoin Core Developers\n', and
12 instances of 'The Bitcoin developers\n'
are made uniform with the 443 instances of 'The Bitcoin Core developers\n'
2)
3 instances of 'BitPay, Inc\.\n' are made uniform with the other 6
instances of 'BitPay Inc\.\n'
3)
4 instances where there was no '(c)' between the 'Copyright' and the year,
deviating from the style of the local directory, are brought into line with it.
|
|
5eaaa83 Kill insecure_random and associated global state (Wladimir J. van der Laan)
|
|
There are only a few uses of `insecure_random` outside the tests.
This PR replaces uses of insecure_random (and its accompanying global
state) in the core code with a FastRandomContext that is automatically
seeded on creation.
This is meant to be used for inner loops. The FastRandomContext can live
in the outer scope or in the class itself, and rand32() is then called
inside the loop. This is useful e.g. for pushing addresses in CNode, fee
rounding, or randomization in coin selection.
Because a context is created per purpose, this gets rid of unprotected
cross-thread use of a single set of globals and should also eliminate the
potential race conditions.
- I'd say TxMempool::check is not called enough to warrant using a special
fast random context, so this is switched to GetRand() (open for
discussion...)
- The use of `insecure_rand` in ConnectThroughProxy has been replaced by
an atomic integer counter. The only goal here is to have a different
credentials pair for each connection so that it goes over a different Tor
circuit; it does not need to be random or unpredictable.
- To avoid having a FastRandomContext on every CNode, the context is
passed into PushAddress as appropriate.
There remains an insecure_random for test usage in `test_random.h`.
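For illustration, a minimal sketch of the intended pattern (the surrounding function is hypothetical, and the snippet assumes the project's random.h for FastRandomContext and rand32() as named above): the context lives in the outer scope and only the cheap per-iteration draw happens inside the loop, without touching any shared global state.

    // Sketch only: a per-purpose FastRandomContext, seeded on creation,
    // replaces the old insecure_random() global state.
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    #include "random.h" // FastRandomContext

    // Hypothetical helper: draw one bounded random value per item.
    std::vector<uint32_t> PickRandomOffsets(size_t count, uint32_t range)
    {
        FastRandomContext rng; // outer scope, one context per purpose
        std::vector<uint32_t> offsets;
        offsets.reserve(count);
        for (size_t i = 0; i < count; ++i) {
            offsets.push_back(rng.rand32() % range); // cheap inner-loop draw
        }
        return offsets;
    }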
|
|
uncompressed keys for segwit scripts
|
|
c59c434 qa: Add test for standardness of segwit v0 outputs (Suhas Daftuar)
1ffaff2 Make witness v0 outputs non-standard before segwit activation (Johnson Lau)
|
|
Includes changes by Suhas Daftuar, Luke-jr, and mruddy.
|
|
Includes simplifications by Eric Lombrozo.
|
|
4f7c959 Refactor IsRBFOptIn, avoid exception (Jonas Schnelli)
|
|
03c77fd Doc: Update isStandardTx comment (Matthew English)
|
|
71527a0 Test of BIP9 fork activation of mtp, csv, sequence_lock (NicolasDorier)
19d73d5 Add RPC test for BIP 68/112/113 soft fork. (Alex Morcos)
12c89c9 Policy: allow transaction version 2 relay policy. (BtcDrak)
02c2435 Soft fork logic for BIP68 (BtcDrak)
478fba6 Soft fork logic for BIP113 (BtcDrak)
65751a3 Add CHECKSEQUENCEVERIFY softfork through BIP9 (Pieter Wuille)
|
|
The "feefilter" p2p message is used to inform other nodes of your mempool min fee, which is the feerate that any new transaction must meet to be accepted to your mempool. This allows them to filter invs to you according to this feerate.
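For context, a minimal self-contained sketch of the relay-side check this message enables (the types and names here are illustrative, not Bitcoin Core's): a transaction is only announced to a peer whose advertised filter it meets.

    #include <cstddef>
    #include <cstdint>

    using Amount = int64_t; // satoshis

    // Feerate a peer announced via "feefilter", in satoshis per 1000 bytes.
    struct PeerState {
        Amount feefilter = 0; // 0 means "no filter received"
    };

    // Should a transaction paying `fee` for `size` bytes be inv'ed to this peer?
    bool ShouldAnnounce(const PeerState& peer, Amount fee, size_t size)
    {
        if (peer.feefilter == 0 || size == 0) return true;
        const Amount feePerK = fee * 1000 / static_cast<Amount>(size);
        return feePerK >= peer.feefilter;
    }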
|
|
This commit introduces a way to gracefully bump the default
transaction version in a two-step process.
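A rough sketch of the two-step idea (the constants below are assumed for illustration, not quoted from the codebase): relay policy starts accepting version-2 transactions first, while the default version used when creating transactions is bumped separately later.

    #include <cstdint>

    static const int32_t DEFAULT_TX_VERSION   = 1; // step 2: bump this later
    static const int32_t MAX_STANDARD_VERSION = 2; // step 1: relay v2 as standard

    bool IsStandardVersion(int32_t nVersion)
    {
        return nVersion >= 1 && nVersion <= MAX_STANDARD_VERSION;
    }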
|
|
- Replace NOP3 with CHECKSEQUENCEVERIFY (BIP112)
  <nSequence> CHECKSEQUENCEVERIFY -> <nSequence>
- Fails if txin.nSequence < nSequence, allowing the funds of a txout to be locked for a number of blocks or a duration of time after its inclusion in a block.
- Pull most of CheckLockTime() out into VerifyLockTime(), a local function that will be reused for CheckSequence()
- Add a bitwise AND operator to CScriptNum
- Enable CHECKSEQUENCEVERIFY as a standard script verify flag
- Transactions that fail CSV verification will be rejected from the mempool, making it easy to test the feature. However, blocks containing "invalid" CSV-using transactions will still be accepted; this is *not* the soft-fork required to actually enable CSV for production use.
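For illustration, a self-contained sketch of the comparison CHECKSEQUENCEVERIFY performs (constants per BIP 68/BIP 112; the function name is made up, and the real implementation also rejects negative or oversized stack operands earlier):

    #include <cstdint>

    static const uint32_t SEQUENCE_LOCKTIME_DISABLE_FLAG = 1u << 31; // lock not enforced
    static const uint32_t SEQUENCE_LOCKTIME_TYPE_FLAG    = 1u << 22; // time- vs. height-based
    static const uint32_t SEQUENCE_LOCKTIME_MASK         = 0x0000ffff;

    // True if the script operand <nSequence> is satisfied by the spending
    // input's nSequence, per BIP 112.
    bool CheckSequenceValue(uint32_t nOperand, int32_t txVersion, uint32_t txinSequence)
    {
        // Operand with the disable flag set: CHECKSEQUENCEVERIFY acts as a NOP.
        if (nOperand & SEQUENCE_LOCKTIME_DISABLE_FLAG) return true;

        // Relative lock-times only apply to version-2+ transactions whose
        // input has not opted out.
        if (txVersion < 2) return false;
        if (txinSequence & SEQUENCE_LOCKTIME_DISABLE_FLAG) return false;

        // Both values must use the same units (blocks or 512-second intervals)...
        if ((nOperand & SEQUENCE_LOCKTIME_TYPE_FLAG) !=
            (txinSequence & SEQUENCE_LOCKTIME_TYPE_FLAG)) return false;

        // ...and the input's masked value must be at least the operand's.
        return (nOperand & SEQUENCE_LOCKTIME_MASK) <=
               (txinSequence & SEQUENCE_LOCKTIME_MASK);
    }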
|
|
b043c4b fix sdaftuar's nits again (Alex Morcos)
a51c79b Bug fix to RPC test (Alex Morcos)
da6ad5f Add RPC test exercising BIP68 (mempool only) (Suhas Daftuar)
c6c2f0f Implement SequenceLocks functions (Alex Morcos)
|
|
it boggles the mind why these nits can't be delivered on a more timely basis
|
|
SequenceLocks functions are used to evaluate sequence lock times or heights per BIP 68.
The majority of this code is copied from maaku in #6312.
Further credit: btcdrak, sipa, NicolasDorier
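As a rough illustration of what these functions evaluate (constants per BIP 68; the struct and helper are hypothetical), an input's nSequence encodes either a minimum block-height delta or a minimum elapsed time in 512-second units:

    #include <cstdint>

    static const uint32_t SEQUENCE_LOCKTIME_DISABLE_FLAG = 1u << 31;
    static const uint32_t SEQUENCE_LOCKTIME_TYPE_FLAG    = 1u << 22; // set => time-based
    static const uint32_t SEQUENCE_LOCKTIME_MASK         = 0x0000ffff;
    static const int SEQUENCE_LOCKTIME_GRANULARITY       = 9;        // 2^9 = 512 seconds

    struct RelativeLock {
        bool enabled;   // false if the disable flag is set
        bool timeBased; // true => seconds, false => blocks
        int64_t value;  // required delta, in seconds or blocks
    };

    RelativeLock DecodeSequence(uint32_t nSequence)
    {
        if (nSequence & SEQUENCE_LOCKTIME_DISABLE_FLAG) return {false, false, 0};
        const int64_t masked = nSequence & SEQUENCE_LOCKTIME_MASK;
        if (nSequence & SEQUENCE_LOCKTIME_TYPE_FLAG) {
            return {true, true, masked << SEQUENCE_LOCKTIME_GRANULARITY};
        }
        return {true, false, masked};
    }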
|
|
(cherry picked from commit 52b29dca7670c3f6d2ab918c0fff1d17c4e494ad)
|
|
Add a "bip125-replaceable" output field to listtransactions and gettransaction,
which indicates whether an unconfirmed transaction, or any unconfirmed parent, is
signaling opt-in RBF according to BIP 125.
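For context, a minimal sketch of the signaling rule the new field reports (per BIP 125; the types are illustrative, and the real check also walks unconfirmed ancestors in the mempool):

    #include <cstdint>
    #include <vector>

    static const uint32_t SEQUENCE_FINAL = 0xffffffff;
    // A transaction signals replaceability if any input's nSequence is
    // less than 0xfffffffe, i.e. at most SEQUENCE_FINAL - 2.
    static const uint32_t MAX_BIP125_RBF_SEQUENCE = 0xfffffffd;

    struct TxIn { uint32_t nSequence = SEQUENCE_FINAL; };

    bool SignalsOptInRBF(const std::vector<TxIn>& vin)
    {
        for (const TxIn& txin : vin) {
            if (txin.nSequence <= MAX_BIP125_RBF_SEQUENCE) return true;
        }
        return false;
    }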
|
|
6cd198f Removed comment about IsStandard for P2SH scripts (Marcel Krüger)
|
|
Since #4365 (62599373883a66a958136f48ab0e2b826e3d5bf8), P2SH scripts do not have to be IsStandard scripts.
|
|
Make RPC tests have a default block priority size of 50000 (the old default) so we can still use free transactions in RPC tests. When priority is eliminated, we will have to make a different change if we want to continue allowing free txs.
|
|
This provides more conservative estimates and reacts more quickly to a backlog.
Unfortunately, the unit test for fee estimation depends on the success threshold (and the decay) chosen, so the unit test is also modified for the new default success thresholds.
|
|
These are more useful fee and priority estimation functions. If there is no fee/pri high enough for the target you are aiming for, it will give you the estimate for the lowest target that you can reliably obtain. This is better than defaulting to the minimum. It will also pass back the target for which it returned an answer.
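A minimal sketch of the lookup behaviour described above (the data layout and names are made up for illustration): scan from the requested confirmation target upward until a reliable estimate exists, and report both the fee and the target that actually answered.

    #include <cstddef>
    #include <vector>

    // feeByTarget[i] holds the fee rate considered reliable for confirmation
    // within i+1 blocks, or -1 if no reliable answer exists for that target.
    double EstimateSmartFee(const std::vector<double>& feeByTarget,
                            int requestedTarget, int* answeredTarget)
    {
        if (requestedTarget < 1) requestedTarget = 1;
        for (std::size_t i = requestedTarget - 1; i < feeByTarget.size(); ++i) {
            if (feeByTarget[i] >= 0) {
                if (answeredTarget) *answeredTarget = static_cast<int>(i) + 1;
                return feeByTarget[i];
            }
        }
        if (answeredTarget) *answeredTarget = -1;
        return -1; // nothing could be reliably estimated
    }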
|
|
of lock-time constraints""
This reverts commit 8537ecdfc40181249ec37556015a99cfae4b21fd.
|
|
Revert "Revert "Add rules--presently disabled--for using GetMedianTimePast as endpoint for lock-time calculations""
This reverts commit 40cd32e835092c3158175511da5193193ec54939.
After careful analysis it was determined that the change was, in fact, safe and several people were suffering
momentary confusion about locktime semantics.
|
|
endpoint for lock-time calculations"
This reverts commit 9d55050773d57c0e12005e524f2e54d9e622c6e2.
As noted by Luke-Jr, under some conditions this will accept transactions which are invalid by the network
rules. This happens when the current block time is ahead of the median time past and a transaction's
locktime is in the middle.
This could be addressed by changing the rule to MAX(this_block_time, MTP+offset), but this solution and
the particular offset used deserve some consideration.
|
|
lock-time constraints"
This reverts commit dea8d21fc63e9f442299c97010e4740558f4f037.
|
|
- ensure header namespaces and end comments are correct
- add missing header end comments
- ensure minimal formatting (add newlines etc.)
|
|
constraints
Transactions are not allowed in the memory pool or selected for inclusion in a block until chainActive.Tip()->GetMedianTimePast() exceeds their lock times. However, blocks including transactions which are only mature under the old rules are still accepted; this is *not* the soft-fork required to actually rely on the new constraint in production.
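A self-contained sketch of the finality check with the median-time-past cutoff described above (names are illustrative; per the protocol, lock-time values below 500,000,000 are block heights, otherwise UNIX timestamps, and inputs with final sequence numbers override the check in the real code):

    #include <cstdint>

    static const int64_t LOCKTIME_THRESHOLD = 500000000; // heights below, times above

    bool IsFinalAt(int64_t nLockTime, int blockHeight, int64_t medianTimePast)
    {
        if (nLockTime == 0) return true;
        const int64_t cutoff =
            (nLockTime < LOCKTIME_THRESHOLD) ? static_cast<int64_t>(blockHeight)
                                             : medianTimePast;
        return nLockTime < cutoff;
    }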
|