path: root/src/policy
Age    Commit message    Author
2016-02-14BIP112: Implement CHECKSEQUENCEVERIFYMark Friedenbach
- Replace NOP3 with CHECKSEQUENCEVERIFY (BIP112)
  <nSequence> CHECKSEQUENCEVERIFY -> <nSequence>
- Fails if txin.nSequence < nSequence, allowing funds of a txout to be locked for a number of blocks or a duration of time after its inclusion in a block.
- Pull most of CheckLockTime() out into VerifyLockTime(), a local function that will be reused for CheckSequence()
- Add bitwise AND operator to CScriptNum
- Enable CHECKSEQUENCEVERIFY as a standard script verify flag
- Transactions that fail CSV verification will be rejected from the mempool, making it easy to test the feature. However blocks containing "invalid" CSV-using transactions will still be accepted; this is *not* the soft-fork required to actually enable CSV for production use.
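A minimal sketch of the relative lock-time comparison that CHECKSEQUENCEVERIFY performs, assuming the BIP68 bit layout (bit 31 disables the check, bit 22 selects 512-second time units, the low 16 bits carry the value); the constant and function names below are illustrative, not the ones used in Bitcoin Core:

    #include <cstdint>

    static const uint32_t SEQ_DISABLE_FLAG = 1u << 31;  // assumed BIP68 disable bit
    static const uint32_t SEQ_TYPE_FLAG    = 1u << 22;  // set = time units, clear = blocks
    static const uint32_t SEQ_MASK         = 0x0000ffff;

    // Returns true if the input's nSequence satisfies the value pushed by the script.
    bool CheckSequenceSketch(uint32_t txinSequence, uint32_t stackSequence)
    {
        // If the script operand has the disable bit set, the operator acts as a no-op.
        if (stackSequence & SEQ_DISABLE_FLAG) return true;
        // If the input itself has relative lock-time disabled, it can never satisfy the check.
        if (txinSequence & SEQ_DISABLE_FLAG) return false;
        // Both values must use the same units (blocks vs. 512-second intervals).
        if ((txinSequence & SEQ_TYPE_FLAG) != (stackSequence & SEQ_TYPE_FLAG)) return false;
        // Fail if the input's relative lock-time is smaller than what the script demands.
        return (txinSequence & SEQ_MASK) >= (stackSequence & SEQ_MASK);
    }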
2016-02-12Merge #7184: Implement SequenceLocks functions for BIP 68Wladimir J. van der Laan
b043c4b fix sdaftuar's nits again (Alex Morcos)
a51c79b Bug fix to RPC test (Alex Morcos)
da6ad5f Add RPC test exercising BIP68 (mempool only) (Suhas Daftuar)
c6c2f0f Implement SequenceLocks functions (Alex Morcos)
2016-02-11fix sdaftuar's nits againAlex Morcos
it boggles the mind why these nits can't be delivered on a more timely basis
2016-02-10Implement SequenceLocks functionsAlex Morcos
SequenceLocks functions are used to evaluate sequence lock times or heights per BIP 68. The majority of this code is copied from maaku in #6312. Further credit: btcdrak, sipa, NicolasDorier.
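A rough sketch of how per-input BIP68 sequence locks can be folded into a single earliest (height, time) pair, under the same bit-layout assumptions as in the CHECKSEQUENCEVERIFY sketch above; the types and names are simplified stand-ins for the real SequenceLocks code:

    #include <algorithm>
    #include <cstdint>
    #include <utility>
    #include <vector>

    struct InputLock {
        uint32_t nSequence;          // the spending input's sequence field
        int prevHeight;              // height of the block containing the spent output
        int64_t prevMedianTimePast;  // median time past of that block's chain position
    };

    // Computes the earliest (height, time) at which the transaction can be included.
    std::pair<int, int64_t> CalculateSequenceLocksSketch(const std::vector<InputLock>& inputs)
    {
        static const uint32_t DISABLE = 1u << 31, TIME_TYPE = 1u << 22, MASK = 0xffff;
        int minHeight = -1;
        int64_t minTime = -1;
        for (const InputLock& in : inputs) {
            if (in.nSequence & DISABLE) continue;   // relative lock disabled for this input
            uint32_t value = in.nSequence & MASK;
            if (in.nSequence & TIME_TYPE) {
                // Time-based: value is in 512-second units past the prevout's chain time.
                minTime = std::max(minTime, in.prevMedianTimePast + (int64_t)(value << 9) - 1);
            } else {
                // Height-based: value is a number of blocks past the prevout's height.
                minHeight = std::max(minHeight, in.prevHeight + (int)value - 1);
            }
        }
        return {minHeight, minTime};
    }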
2016-02-01Get rid of inaccurate ScriptSigArgsExpectedPieter Wuille
(cherry picked from commit 52b29dca7670c3f6d2ab918c0fff1d17c4e494ad)
2016-01-19RPC: indicate which transactions are replaceableSuhas Daftuar
Add "bip125-replaceable" output field to listtransactions and gettransaction which indicates if an unconfirmed transaction, or any unconfirmed parent, is signaling opt-in RBF according to BIP 125.
2016-01-17Typo fixes in commentsChris Wheeler
2016-01-07Merge pull request #7266Wladimir J. van der Laan
6cd198f Removed comment about IsStandard for P2SH scripts (Marcel Krüger)
2015-12-30Removed comment about IsStandard for P2SH scriptsMarcel Krüger
Since #4365 (62599373883a66a958136f48ab0e2b826e3d5bf8) P2SH scripts do not have to be IsStandard scripts.
2015-12-13Bump copyright headers to 2015MarcoFalke
2015-11-30Change default block priority size to 0Alex Morcos
Make RPC tests have a default block priority size of 50000 (the old default) so we can still use free transactions in RPC tests. When priority is eliminated, we will have to make a different change if we want to continue allowing free txs.
2015-11-28Constrain constant values to a single location in codeLuke Dashjr
2015-11-24Pass reference to estimateSmartFee and cleanup whitespaceSuhas Daftuar
2015-11-16EstimateSmart functions consider mempool min feeAlex Morcos
2015-11-16Increase success threshold for fee estimation to 95%Alex Morcos
This provides more conservative estimates and reacts more quickly to a backlog. Unfortunately the unit test for fee estimation depends on the success threshold (and the decay) chosen, so the unit test is also modified for the new default success thresholds.
2015-11-16Add smart fee estimation functionsAlex Morcos
These are more useful fee and priority estimation functions. If there is no fee/pri high enough for the target you are aiming for, it will give you the estimate for the lowest target that you can reliably obtain. This is better than defaulting to the minimum. It will also pass back the target for which it returned an answer.
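An illustrative sketch of the fallback behaviour described here, assuming a plain per-target estimator that can fail for short targets (returning a negative value); the estimator interface and names are hypothetical:

    #include <functional>

    struct FeeEstimate {
        double feePerKB;        // negative if no reliable estimate exists at all
        int answeredAtTarget;   // the confirmation target that actually produced the answer
    };

    FeeEstimate EstimateSmartFeeSketch(int confTarget, int maxTarget,
                                       const std::function<double(int)>& estimateForTarget)
    {
        for (int target = confTarget; target <= maxTarget; ++target) {
            double fee = estimateForTarget(target);   // < 0 means "not enough data"
            if (fee >= 0) return {fee, target};       // lowest target we can reliably answer
        }
        return {-1.0, maxTarget};                     // no reliable estimate at any target
    }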
2015-11-03Revert "Revert "Enable policy enforcing GetMedianTimePast as the end point of lock-time constraints""Gregory Maxwell
This reverts commit 8537ecdfc40181249ec37556015a99cfae4b21fd.
2015-11-03Restore MedianTimePast for locktime.Gregory Maxwell
Revert "Revert "Add rules--presently disabled--for using GetMedianTimePast as endpoint for lock-time calculations"" This reverts commit 40cd32e835092c3158175511da5193193ec54939. After careful analysis it was determined that the change was, in fact, safe and several people were suffering momentary confusion about locktime semantics.
2015-11-01Revert "Add rules--presently disabled--for using GetMedianTimePast as ↵Gregory Maxwell
endpoint for lock-time calculations" This reverts commit 9d55050773d57c0e12005e524f2e54d9e622c6e2. As noted by Luke-Jr, under some conditions this will accept transactions which are invalid by the network rules. This happens when the current block time is head of the median time past and a transaction's locktime is in the middle. This could be addressed by changing the rule to MAX(this_block_time, MTP+offset) but this solution and the particular offset used deserve some consideration.
2015-11-01Revert "Enable policy enforcing GetMedianTimePast as the end point of ↵Gregory Maxwell
lock-time constraints" This reverts commit dea8d21fc63e9f442299c97010e4740558f4f037.
2015-10-27[Trivial] ensure minimal header conventionsPhilip Kaufmann
- ensure header namespaces and end comments are correct
- add missing header end comments
- ensure minimal formatting (add newlines etc.)
2015-10-23Enable policy enforcing GetMedianTimePast as the end point of lock-time constraintsMark Friedenbach
Transactions are not allowed in the memory pool or selected for inclusion in a block until their lock times exceed chainActive.Tip()->GetMedianTimePast(). However blocks including transactions which are only mature under the old rules are still accepted; this is *not* the soft-fork required to actually rely on the new constraint in production.
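A simplified sketch of the finality test this policy tightens: time-based lock-times are now measured against the chain tip's median time past rather than the wall clock. The types and helper name are illustrative, not the actual IsFinalTx() signature:

    #include <cstdint>
    #include <vector>

    static const uint32_t LOCKTIME_THRESHOLD = 500000000;  // below = block height, above = unix time

    bool IsFinalTxSketch(uint32_t nLockTime, const std::vector<uint32_t>& inputSequences,
                         int blockHeight, int64_t medianTimePast)
    {
        if (nLockTime == 0) return true;
        // Under this policy, time-based locks compare against median time past, not block time.
        int64_t cutoff = (nLockTime < LOCKTIME_THRESHOLD) ? (int64_t)blockHeight : medianTimePast;
        if ((int64_t)nLockTime < cutoff) return true;
        // A lock-time is also ignored if every input opts out with a final sequence number.
        for (uint32_t seq : inputSequences)
            if (seq != 0xffffffffU) return false;
        return true;
    }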
2015-10-23Add rules--presently disabled--for using GetMedianTimePast as endpoint for lock-time calculationsMark Friedenbach
The lock-time code currently uses CBlock::nTime as the cutoff point for time-based locked transactions. This has the unfortunate outcome of creating a perverse incentive for miners to lie about the time of a block in order to collect more fees by including transactions that by wall clock determination have not yet matured.
By using CBlockIndex::GetMedianTimePast from the prior block instead, the self-interested miner no longer gains from generating blocks with fraudulent timestamps. Users can compensate for this change by simply adding an hour (3600 seconds) to their time-based lock times.
If enforced, this would be a soft-fork change. This commit only adds the functionality on an unexecuted code path, without changing the behaviour of Bitcoin Core.
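A small sketch of the median-time-past cutoff itself: the median of the previous blocks' timestamps (11 in Bitcoin), so a single miner lying about one block's time moves the cutoff very little. This is illustrative, not the CBlockIndex implementation:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // lastBlockTimes holds the timestamps of up to the last 11 blocks.
    int64_t MedianTimePastSketch(std::vector<int64_t> lastBlockTimes)
    {
        if (lastBlockTimes.empty()) return 0;
        std::sort(lastBlockTimes.begin(), lastBlockTimes.end());
        return lastBlockTimes[lastBlockTimes.size() / 2];  // middle element = median
    }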
2015-10-06Test LowS in standardness, removes nuisance malleability vector.Gregory Maxwell
This adds SCRIPT_VERIFY_LOW_S to STANDARD_SCRIPT_VERIFY_FLAGS, which will make the node require the canonical 'low-s' encoding for ECDSA signatures when relaying or mining. Consensus behavior is unchanged.
The rationale is explained in a81cd96805ce6b65cca3a40ebbd3b2eb428abb7b: absent this kind of test, ECDSA is not a strong signature, as given a valid signature {r, s} both that value and {r, -s mod n} are valid. These two encodings have different hashes, allowing third parties a vector to change users' txids. These attacks are avoided by picking a particular form as canonical and rejecting the other form(s); in the case of the LOW_S rule, the smaller of the two possible S values is used.
If widely deployed, this change would eliminate the last remaining known vector for nuisance malleability on boring SIGHASH_ALL p2pkh transactions. On the down-side it will block most transactions made by sufficiently out-of-date software.
Unlike the other avenues to change txids on boring transactions, this one was randomly violated by all deployed bitcoin software prior to its discovery. So, while other malleability vectors were made non-standard as soon as they were discovered, this one has remained permitted. Even BIP62 did not propose applying this rule to old-version transactions, but conforming implementations have become much more common since BIP62 was initially written.
Bitcoin Core has produced compatible signatures since a28fb70e in September 2013, but this didn't make it into a release until 0.9 in March 2014; Bitcoinj has done so for a similar span of time. Bitcoinjs and Electrum have been more recently updated.
This does not replace the need for BIP62 or similar, as miners can still cooperate to break transactions. Nor does it replace the need for wallet software to handle malleability sanely[1]. This only eliminates the cheap and irritating DOS attack.
[1] On the Malleability of Bitcoin Transactions
    Marcin Andrychowicz, Stefan Dziembowski, Daniel Malinowski, Łukasz Mazurek
    http://fc15.ifca.ai/preproceedings/bitcoin/paper_9.pdf
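A minimal sketch of the low-S test: for secp256k1, {r, s} and {r, n - s} are both valid for the same message, so standardness accepts only the form whose S component is at most n/2. The half-order constant below is the secp256k1 group order divided by two; the helper name is illustrative, and the real check also validates the DER encoding:

    #include <cstring>
    #include <vector>

    // secp256k1 group order n, divided by two, big-endian.
    static const unsigned char HALF_ORDER[32] = {
        0x7F,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,
        0x5D,0x57,0x6E,0x73,0x57,0xA4,0x50,0x1D,0xDF,0xE9,0x2F,0x46,0x68,0x1B,0x20,0xA0};

    // sBigEndian must be the 32-byte big-endian S component of the signature.
    bool IsLowSSketch(const std::vector<unsigned char>& sBigEndian)
    {
        if (sBigEndian.size() != 32) return false;  // simplified: fixed-width S only
        // Byte-wise big-endian comparison equals numeric comparison for equal widths.
        return std::memcmp(sBigEndian.data(), HALF_ORDER, 32) <= 0;
    }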
2015-10-01Accept any sequence of PUSHDATAs in OP_RETURN outputsPeter Todd
Previously only one PUSHDATA was allowed, needlessly limiting applications such as matching OP_RETURN contents with bloom filters that operate on a per-PUSHDATA level. Now any combination that passes IsPushOnly() is allowed, so long as the total size of the scriptPubKey is less than 42 bytes. (unchanged modulo non-minimal PUSHDATA encodings)
Also, this fixes the odd bug where previously the PUSHDATA could be replaced by any single opcode, even sigop-consuming opcodes such as CHECKMULTISIG. (20 sigops!)
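A simplified byte-level sketch of the relaxed rule: after OP_RETURN, any sequence of push operations is acceptable as long as the whole scriptPubKey stays under the relay size limit. The real code uses CScript::IsPushOnly(); the parser and the size-limit parameter below are illustrative:

    #include <cstddef>
    #include <vector>

    bool IsStandardNullDataSketch(const std::vector<unsigned char>& spk, size_t maxBytes)
    {
        if (spk.size() > maxBytes || spk.empty() || spk[0] != 0x6a) return false;  // 0x6a = OP_RETURN
        size_t i = 1;
        while (i < spk.size()) {
            unsigned char op = spk[i++];
            size_t len = 0;
            if (op <= 0x4b) {                              // direct push of `op` bytes
                len = op;
            } else if (op == 0x4c) {                       // OP_PUSHDATA1: 1-byte length
                if (spk.size() - i < 1) return false;
                len = spk[i]; i += 1;
            } else if (op == 0x4d) {                       // OP_PUSHDATA2: 2-byte little-endian length
                if (spk.size() - i < 2) return false;
                len = spk[i] | (size_t(spk[i + 1]) << 8); i += 2;
            } else if (op == 0x4e) {                       // OP_PUSHDATA4: 4-byte little-endian length
                if (spk.size() - i < 4) return false;
                len = spk[i] | (size_t(spk[i + 1]) << 8)
                    | (size_t(spk[i + 2]) << 16) | (size_t(spk[i + 3]) << 24);
                i += 4;
            } else if (op == 0x4f || (op >= 0x51 && op <= 0x60)) {
                len = 0;                                   // OP_1NEGATE, OP_1..OP_16: small-integer pushes
            } else {
                return false;                              // any other opcode is not push-only
            }
            if (spk.size() - i < len) return false;        // push runs past the end of the script
            i += len;
        }
        return true;
    }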
2015-08-10typofixes (found by misspell_fixer)Veres Lajos
2015-08-03Make sure LogPrintf strings are line-terminatedWladimir J. van der Laan
Fix the cases where LogPrint[f] was accidentally called without line terminator, which resulted in concatenated log lines. (see e.g. #6492)
2015-06-26Policy: MOVEONLY: 3 functions to policy.o:Luke Dashjr
- [script/standard.o] IsStandard
- [main.o] IsStandardTx
- [main.o] AreInputsStandard
Also, don't use namespace std in policy.cpp
2015-06-26Policy: MOVEONLY: Create policy/policy.h with some constantsJorge Timón
2015-05-13Create new BlockPolicyEstimator for fee estimatesAlex Morcos
This class groups transactions that have been confirmed in blocks into buckets, based on either their fee or their priority. Then, for each bucket, the class calculates what percentage of the transactions were confirmed within various numbers of blocks. It does this by keeping, for each bucket and confirm-block count, an exponentially decaying moving history of the percentage of transactions in that bucket that were confirmed within that number of blocks.
- Eliminate txs which didn't have all inputs available at entry from fee/pri calcs
- Add dynamic breakpoints and tracking of confirmation delays in mempool transactions
- Remove old CMinerPolicyEstimator and CBlockAverage code
- New smartfees.py
- Pass a flag to the estimation code, using IsInitialBlockDownload as a proxy for when we are still catching up and we shouldn't be counting how many blocks it takes for transactions to be included
- Add a policyestimator unit test
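A loose sketch of the bucketed, exponentially decaying tracking described above: each fee (or priority) bucket keeps, per confirmation count, a decayed total of transactions seen and of transactions confirmed within that many blocks. Names and the decay handling are illustrative, not the exact structure used in Core:

    #include <cstddef>
    #include <vector>

    class FeeBucketStatsSketch {
    public:
        FeeBucketStatsSketch(size_t maxConfirms, double decay)
            : decay_(decay), confirmedWithin_(maxConfirms, 0.0), total_(0.0) {}

        // Called once per new block for this bucket, before adding that block's data.
        void ApplyDecay() {
            total_ *= decay_;
            for (double& v : confirmedWithin_) v *= decay_;
        }

        // Record a transaction from this bucket that confirmed after `blocksToConfirm` blocks.
        void Record(size_t blocksToConfirm) {
            total_ += 1.0;                       // counts toward the bucket either way
            if (blocksToConfirm == 0) return;    // guard; callers are expected to pass >= 1
            for (size_t i = blocksToConfirm; i <= confirmedWithin_.size(); ++i)
                confirmedWithin_[i - 1] += 1.0;  // confirming in b blocks satisfies all targets >= b
        }

        // Fraction of (decayed) transactions in this bucket confirmed within `target` blocks.
        double SuccessRate(size_t target) const {
            if (total_ <= 0.0 || target == 0 || target > confirmedWithin_.size()) return 0.0;
            return confirmedWithin_[target - 1] / total_;
        }

    private:
        double decay_;                         // e.g. a value slightly below 1.0
        std::vector<double> confirmedWithin_;  // index i = confirmed within i+1 blocks
        double total_;
    };

An estimator built on buckets like this can answer "lowest feerate whose bucket clears the success threshold for N blocks" by scanning buckets from highest fee to lowest and returning the first one whose SuccessRate(N) passes the threshold.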