-rw-r--r--  README.mediawiki  75
-rw-r--r--  bip-0002.mediawiki  2
-rw-r--r--  bip-0013.mediawiki  4
-rw-r--r--  bip-0032.mediawiki  2
-rw-r--r--  bip-0039.mediawiki  10
-rw-r--r--  bip-0044.mediawiki  2
-rw-r--r--  bip-0049.mediawiki  12
-rw-r--r--  bip-0065.mediawiki  2
-rw-r--r--  bip-0084.mediawiki  92
-rw-r--r--  bip-0098.mediawiki  308
-rwxr-xr-x  bip-0098/build.sh  6
-rw-r--r--  bip-0098/node-variants.dot  85
-rw-r--r--  bip-0098/node-variants.png  bin 0 -> 105569 bytes
-rw-r--r--  bip-0098/skip-skip.dot  7
-rw-r--r--  bip-0098/skip-skip.png  bin 0 -> 9434 bytes
-rw-r--r--  bip-0098/traversal-example.dot  32
-rw-r--r--  bip-0098/traversal-example.png  bin 0 -> 60703 bytes
-rw-r--r--  bip-0098/unbalanced-hash-tree.dot  11
-rw-r--r--  bip-0098/unbalanced-hash-tree.png  bin 0 -> 22836 bytes
-rw-r--r--  bip-0115.mediawiki  2
-rw-r--r--  bip-0116.mediawiki  145
-rw-r--r--  bip-0117.mediawiki  196
-rw-r--r--  bip-0120.mediawiki  2
-rw-r--r--  bip-0121.mediawiki  2
-rw-r--r--  bip-0125.mediawiki  6
-rw-r--r--  bip-0141.mediawiki  2
-rw-r--r--  bip-0152.mediawiki  2
-rw-r--r--  bip-0157.mediawiki  471
-rw-r--r--  bip-0158.mediawiki  431
-rw-r--r--  bip-0159.mediawiki  28
-rw-r--r--  bip-0173.mediawiki  37
-rw-r--r--  bip-0174.mediawiki  528
-rw-r--r--  bip-0174/coinjoin-workflow.png  bin 0 -> 45999 bytes
-rw-r--r--  bip-0174/multisig-workflow.png  bin 0 -> 75935 bytes
-rw-r--r--  bip-0175.mediawiki  259
-rw-r--r--  bip-0176.mediawiki  57
-rwxr-xr-x  scripts/buildtable.pl  5
37 files changed, 2768 insertions, 55 deletions
diff --git a/README.mediawiki b/README.mediawiki
index df92703..eb3b0e7 100644
--- a/README.mediawiki
+++ b/README.mediawiki
@@ -1,4 +1,4 @@
-People wishing to submit BIPs, first should propose their idea or document to the mailing list. After discussion they should email Luke Dashjr <luke_bipeditor@dashjr.org>. After copy-editing and acceptance, it will be published here.
+People wishing to submit BIPs should first propose their idea or document to the [https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev bitcoin-dev@lists.linuxfoundation.org] mailing list. After discussion, please open a PR. After copy-editing and acceptance, it will be published here.
We are fairly liberal with approving BIPs, and try not to be too involved in decision making on behalf of the community. The exception is in very rare cases of dispute resolution when a decision is contentious and cannot be agreed upon. In those cases, the conservative option will always be preferred.
@@ -407,6 +407,13 @@ Those proposing changes should consider that ultimately consent may rest with th
| Standard
| Draft
|-
+| [[bip-0084.mediawiki|84]]
+| Applications
+| Derivation scheme for P2WPKH based accounts
+| Pavol Rusnak
+| Informational
+| Draft
+|-
| [[bip-0090.mediawiki|90]]
| Consensus (hard fork)
| Buried Deployments
@@ -421,6 +428,13 @@ Those proposing changes should consider that ultimately consent may rest with th
| Standard
| Final
|-
+| [[bip-0098.mediawiki|98]]
+| Consensus (soft fork)
+| Fast Merkle Trees
+| Mark Friedenbach, Kalle Alm, BtcDrak
+| Standard
+| Draft
+|-
| [[bip-0099.mediawiki|99]]
|
| Motivation and deployment of consensus rule changes ([soft/hard]forks)
@@ -519,19 +533,33 @@ Those proposing changes should consider that ultimately consent may rest with th
| Standard
| Draft
|-
+| [[bip-0116.mediawiki|116]]
+| Consensus (soft fork)
+| MERKLEBRANCHVERIFY
+| Mark Friedenbach, Kalle Alm, BtcDrak
+| Standard
+| Draft
+|-
+| [[bip-0117.mediawiki|117]]
+| Consensus (soft fork)
+| Tail Call Execution Semantics
+| Mark Friedenbach, Kalle Alm, BtcDrak
+| Standard
+| Draft
+|- style="background-color: #ffcfcf"
| [[bip-0120.mediawiki|120]]
| Applications
| Proof of Payment
| Kalle Rosenbaum
| Standard
-| Draft
-|-
+| Withdrawn
+|- style="background-color: #ffcfcf"
| [[bip-0121.mediawiki|121]]
| Applications
| Proof of Payment URI scheme
| Kalle Rosenbaum
| Standard
-| Draft
+| Withdrawn
|-
| [[bip-0122.mediawiki|122]]
| Applications
@@ -708,9 +736,23 @@ Those proposing changes should consider that ultimately consent may rest with th
| Standard
| Draft
|-
+| [[bip-0157.mediawiki|157]]
+| Peer Services
+| Client Side Block Filtering
+| Olaoluwa Osuntokun, Alex Akselrod, Jim Posen
+| Standard
+| Draft
+|-
+| [[bip-0158.mediawiki|158]]
+| Peer Services
+| Compact Block Filters for Light Clients
+| Olaoluwa Osuntokun, Alex Akselrod
+| Standard
+| Draft
+|-
| [[bip-0159.mediawiki|159]]
| Peer Services
-| NODE_NETWORK_LIMITED service bits
+| NODE_NETWORK_LIMITED service bit
| Jonas Schnelli
| Standard
| Draft
@@ -721,12 +763,33 @@ Those proposing changes should consider that ultimately consent may rest with th
| Luke Dashjr
| Standard
| Draft
-|-
+|- style="background-color: #ffffcf"
| [[bip-0173.mediawiki|173]]
| Applications
| Base32 address format for native v0-16 witness outputs
| Pieter Wuille, Greg Maxwell
| Informational
+| Proposed
+|-
+| [[bip-0174.mediawiki|174]]
+| Applications
+| Partially Signed Bitcoin Transaction Format
+| Andrew Chow
+| Standard
+| Draft
+|-
+| [[bip-0175.mediawiki|175]]
+| Applications
+| Pay to Contract Protocol
+| Omar Shibli, Nicholas Gregory
+| Informational
+| Draft
+|-
+| [[bip-0176.mediawiki|176]]
+|
+| Bits Denomination
+| Jimmy Song
+| Informational
| Draft
|-
| [[bip-0180.mediawiki|180]]
diff --git a/bip-0002.mediawiki b/bip-0002.mediawiki
index ea60d1d..b4567c4 100644
--- a/bip-0002.mediawiki
+++ b/bip-0002.mediawiki
@@ -208,7 +208,7 @@ Peer services BIPs should be observed to be adopted by at least 1% of public lis
API/RPC and application layer BIPs must be implemented by at least two independent and compatible software applications.
-Software authors are encouraged to publish summaries of what BIPs their software supports to aid in verification of status changes. Good examples of this at the time of writing this BIP, can be observed in [https://github.com/bitcoin/bitcoin/blob/master/doc/bips.md Bitcoin Core's doc/bips.md file] as well as [https://github.com/schildbach/bitcoin-wallet/blob/master/wallet/README.specs Bitcoin Wallet for Android's wallet/README.specs file].
+Software authors are encouraged to publish summaries of what BIPs their software supports to aid in verification of status changes. Good examples of this at the time of writing this BIP, can be observed in [https://github.com/bitcoin/bitcoin/blob/master/doc/bips.md Bitcoin Core's doc/bips.md file] as well as [https://github.com/bitcoin-wallet/bitcoin-wallet/blob/master/wallet/README.specs.md Bitcoin Wallet for Android's wallet/README.specs.md file].
These criteria are considered objective ways to observe the de facto adoption of the BIP, and are not to be used as reasons to oppose or reject a BIP. Should a BIP become actually and unambiguously adopted despite not meeting the criteria outlined here, it should still be updated to Final status.
diff --git a/bip-0013.mediawiki b/bip-0013.mediawiki
index 9805ed0..081ea2a 100644
--- a/bip-0013.mediawiki
+++ b/bip-0013.mediawiki
@@ -14,7 +14,7 @@
This BIP describes a new type of Bitcoin address to support arbitrarily complex transactions. Complexity in this context is defined as what information is needed by the recipient to respend the received coins, in contrast to needing a single ECDSA private key as in current implementations of Bitcoin.
-In essence, an address encoded under this proposal represents the encoded hash of a [[script]], rather than the encoded hash of an ECDSA public key.
+In essence, an address encoded under this proposal represents the encoded hash of a [https://en.bitcoin.it/wiki/Script script], rather than the encoded hash of an ECDSA public key.
==Motivation==
@@ -22,7 +22,7 @@ Enable "end-to-end" secure wallets and payments to fund escrow transactions or o
==Specification==
-The new bitcoin address type is constructed in the same manner as existing bitcoin addresses (see [[Base58Check encoding]]):
+The new bitcoin address type is constructed in the same manner as existing bitcoin addresses (see [https://en.bitcoin.it/Base58Check_encoding Base58Check encoding]):
base58-encode: [one-byte version][20-byte hash][4-byte checksum]
diff --git a/bip-0032.mediawiki b/bip-0032.mediawiki
index 6eba5f9..ab6ff9e 100644
--- a/bip-0032.mediawiki
+++ b/bip-0032.mediawiki
@@ -122,7 +122,7 @@ Each leaf node in the tree corresponds to an actual key, while the internal node
===Key identifiers===
-Extended keys can be identified by the Hash160 (RIPEMD160 after SHA256) of the serialized ECSDA public key K, ignoring the chain code. This corresponds exactly to the data used in traditional Bitcoin addresses. It is not advised to represent this data in base58 format though, as it may be interpreted as an address that way (and wallet software is not required to accept payment to the chain key itself).
+Extended keys can be identified by the Hash160 (RIPEMD160 after SHA256) of the serialized ECDSA public key K, ignoring the chain code. This corresponds exactly to the data used in traditional Bitcoin addresses. It is not advised to represent this data in base58 format though, as it may be interpreted as an address that way (and wallet software is not required to accept payment to the chain key itself).
The first 32 bits of the identifier are called the key fingerprint.
diff --git a/bip-0039.mediawiki b/bip-0039.mediawiki
index c18d7de..84c09a0 100644
--- a/bip-0039.mediawiki
+++ b/bip-0039.mediawiki
@@ -134,6 +134,12 @@ http://github.com/trezor/python-mnemonic
==Other Implementations==
+Go:
+* https://github.com/tyler-smith/go-bip39
+
+Elixir:
+* https://github.com/izelnakri/mnemonic
+
Objective-C:
* https://github.com/nybex/NYMnemonic
@@ -153,8 +159,12 @@ JavaScript:
Ruby:
* https://github.com/sreekanthgs/bip_mnemonic
+Rust:
+* https://github.com/infincia/bip39-rs
+
Swift:
* https://github.com/CikeQiu/CKMnemonic
+* https://github.com/yuzushioh/WalletKit
C++:
* https://github.com/libbitcoin/libbitcoin/blob/master/include/bitcoin/bitcoin/wallet/mnemonic.hpp
diff --git a/bip-0044.mediawiki b/bip-0044.mediawiki
index 5ee2209..4735e27 100644
--- a/bip-0044.mediawiki
+++ b/bip-0044.mediawiki
@@ -269,7 +269,7 @@ is required and a pull request to the above file should be created.
* [[https://copay.io/|Copay]] ([[https://github.com/bitpay/copay|source]])
* [[https://www.coinvault.io/|CoinVault]] ([[https://github.com/CoinVault/dotblock|source]])
* [[https://samouraiwallet.com/|Samourai Wallet]] ([[https://github.com/Samourai-Wallet/samourai-wallet-android|source]])
-
+* [[https://coinomi.com/|Coinomi]] ([[https://github.com/Coinomi/coinomi-android|source]])
* [[https://trezor.io/|TREZOR]] ([[https://github.com/trezor/|source]])
* [[https://www.keepkey.com/|KeepKey]] ([[https://github.com/keepkey/|source]])
* [[https://www.ledgerwallet.com/|Ledger Wallet]] ([[https://github.com/LedgerHQ|source]])
diff --git a/bip-0049.mediawiki b/bip-0049.mediawiki
index 109fde8..74645a1 100644
--- a/bip-0049.mediawiki
+++ b/bip-0049.mediawiki
@@ -20,19 +20,19 @@ This BIP defines the derivation scheme for HD wallets using the P2WPKH-nested-in
With the usage of P2WPKH-nested-in-P2SH ([[bip-0141.mediawiki#p2wpkh-nested-in-bip16-p2sh|BIP 141]]) transactions it is necessary to have a common derivation scheme.
It allows the user to use different HD wallets with the same masterseed and/or a single account seamlessly.
-Thus the user needs to create a dedicated segregate witness accounts, which ensures that only wallets compatible with this BIP
-will detect the account and handle them appropriately.
+Thus the user needs to create dedicated segregated witness accounts, which ensures that only wallets compatible with this BIP
+will detect the accounts and handle them appropriately.
===Considerations===
Two generally different approaches are possible for current BIP44 capable wallets:
-1) Allow the user to use the same account(s) that they already uses, but add segregated witness encoded addresses to it
+1) Allow the user to use the same account(s) that they already use, but add segregated witness encoded addresses to it.
1.1) Use the same public keys as defined in BIP44, but in addition to the normal P2PKH address also derive the P2SH address from it.
1.2) Use the same account root, but branch off and derive different external and internal chain roots to derive dedicated public keys for the segregated witness addresses.
-2) Create dedicated accounts only used for segregated witness addresses.
+2) Create dedicated accounts used only for segregated witness addresses.
The solutions from point 1 have a common disadvantage: if a user imports/recovers a BIP49-compatible wallet masterseed into/in a non-BIP49-compatible wallet, the account might show up but also it might miss some UTXOs.
@@ -53,7 +53,7 @@ serialization method.
m / purpose' / coin_type' / account' / change / address_index
</pre>
-For the `purpose`-path level it uses `49'`. The rest of the levels are used as defined in BIP44
+For the `purpose`-path level it uses `49'`. The rest of the levels are used as defined in BIP44.
===Address derivation===
@@ -68,7 +68,7 @@ To derive the P2SH address from the above calculated public key, we use the enca
==Backwards Compatibility==
-This BIP is not backwards compatible by design as described under [#considerations]. A not compatible wallet will not discover accounts at all and the user will notice that something is wrong.
+This BIP is not backwards compatible by design as described under [#considerations]. An incompatible wallet will not discover accounts at all and the user will notice that something is wrong.
==Test vectors==
diff --git a/bip-0065.mediawiki b/bip-0065.mediawiki
index 065eb15..097751c 100644
--- a/bip-0065.mediawiki
+++ b/bip-0065.mediawiki
@@ -136,7 +136,7 @@ transaction is created, tx3, to ensure that should the payee vanish the payor
can get their deposit back. The process by which the refund transaction is
created is currently vulnerable to transaction malleability attacks, and
additionally, requires the payor to store the refund. Using the same
-scriptPubKey from as in the Two-factor wallets example solves both these issues.
+scriptPubKey form as in the Two-factor wallets example solves both these issues.
===Trustless Payments for Publishing Data===
diff --git a/bip-0084.mediawiki b/bip-0084.mediawiki
new file mode 100644
index 0000000..340dff2
--- /dev/null
+++ b/bip-0084.mediawiki
@@ -0,0 +1,92 @@
+<pre>
+ BIP: 84
+ Layer: Applications
+ Title: Derivation scheme for P2WPKH based accounts
+ Author: Pavol Rusnak <stick@satoshilabs.com>
+ Comments-Summary: No comments yet.
+ Comments-URI: https://github.com/bitcoin/bips/wiki/Comments:BIP-0084
+ Status: Draft
+ Type: Informational
+ Created: 2017-12-28
+ License: CC0-1.0
+</pre>
+
+==Abstract==
+
+This BIP defines the derivation scheme for HD wallets using the P2WPKH ([[bip-0173.mediawiki|BIP 173]]) serialization format for segregated witness transactions.
+
+==Motivation==
+
+With the usage of P2WPKH transactions it is necessary to have a common derivation scheme.
+It allows the user to use different HD wallets with the same masterseed and/or a single account seamlessly.
+
+Thus the user needs to create dedicated segregated witness accounts, which ensures that only wallets compatible with this BIP will detect the accounts and handle them appropriately.
+
+===Considerations===
+
+We use the same rationale as described in Considerations section of [[bip-0049.mediawiki|BIP 49]].
+
+==Specifications==
+
+This BIP defines the two needed steps to derive multiple deterministic addresses based on a [[bip-0032.mediawiki|BIP 32]] root account.
+
+===Public key derivation===
+
+To derive a public key from the root account, this BIP uses the same account-structure as defined in [[bip-0044.mediawiki|BIP 44]] and [[bip-0049.mediawiki|BIP 49]], but only uses a different purpose value to indicate the different transaction serialization method.
+
+<pre>
+m / purpose' / coin_type' / account' / change / address_index
+</pre>
+
+For the <code>purpose</code>-path level it uses <code>84'</code>. The rest of the levels are used as defined in BIP44 or BIP49.
+
+
+===Address derivation===
+
+To derive the P2WPKH address from the above calculated public key, we use the encapsulation defined in [[bip-0141.mediawiki#p2wpkh|BIP 141]]:
+
+
+ witness: <signature> <pubkey>
+ scriptSig: (empty)
+ scriptPubKey: 0 <20-byte-key-hash>
+ (0x0014{20-byte-key-hash})
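As a minimal illustration of the BIP 141 encapsulation above (an editor's sketch, not part of the diff), the scriptPubKey is simply the version byte 0 followed by a push of the 20-byte key hash; the key hash used here is a made-up placeholder, not derived from a real public key:

```python
# Sketch: build the P2WPKH scriptPubKey `0 <20-byte-key-hash>` (0x0014{20-byte-key-hash}).
OP_0 = 0x00

def p2wpkh_script_pubkey(key_hash: bytes) -> bytes:
    """Return the 22-byte scriptPubKey for a P2WPKH output."""
    assert len(key_hash) == 20, "witness program must be the 20-byte HASH160 of the pubkey"
    return bytes([OP_0, 0x14]) + key_hash  # OP_0, push-20, key hash
```
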
+
+==Backwards Compatibility==
+
+This BIP is not backwards compatible by design as described under [#considerations]. An incompatible wallet will not discover accounts at all and the user will notice that something is wrong.
+
+==Test vectors==
+
+<pre>
+ mnemonic = abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon about
+ rootpriv = zprvAWgYBBk7JR8Gjrh4UJQ2uJdG1r3WNRRfURiABBE3RvMXYSrRJL62XuezvGdPvG6GFBZduosCc1YP5wixPox7zhZLfiUm8aunE96BBa4Kei5
+ rootpub = zpub6jftahH18ngZxLmXaKw3GSZzZsszmt9WqedkyZdezFtWRFBZqsQH5hyUmb4pCEeZGmVfQuP5bedXTB8is6fTv19U1GQRyQUKQGUTzyHACMF
+
+ // Account 0, root = m/84'/0'/0'
+ xpriv = zprvAdG4iTXWBoARxkkzNpNh8r6Qag3irQB8PzEMkAFeTRXxHpbF9z4QgEvBRmfvqWvGp42t42nvgGpNgYSJA9iefm1yYNZKEm7z6qUWCroSQnE
+ xpub = zpub6rFR7y4Q2AijBEqTUquhVz398htDFrtymD9xYYfG1m4wAcvPhXNfE3EfH1r1ADqtfSdVCToUG868RvUUkgDKf31mGDtKsAYz2oz2AGutZYs
+
+ // Account 0, first receiving address = m/84'/0'/0'/0/0
+ privkey = KyZpNDKnfs94vbrwhJneDi77V6jF64PWPF8x5cdJb8ifgg2DUc9d
+ pubkey = 0330d54fd0dd420a6e5f8d3624f5f3482cae350f79d5f0753bf5beef9c2d91af3c
+ address = bc1qcr8te4kr609gcawutmrza0j4xv80jy8z306fyu
+
+ // Account 0, second receiving address = m/84'/0'/0'/0/1
+ privkey = Kxpf5b8p3qX56DKEe5NqWbNUP9MnqoRFzZwHRtsFqhzuvUJsYZCy
+ pubkey = 03e775fd51f0dfb8cd865d9ff1cca2a158cf651fe997fdc9fee9c1d3b5e995ea77
+ address = bc1qnjg0jd8228aq7egyzacy8cys3knf9xvrerkf9g
+
+ // Account 0, first change address = m/84'/0'/0'/1/0
+ privkey = KxuoxufJL5csa1Wieb2kp29VNdn92Us8CoaUG3aGtPtcF3AzeXvF
+ pubkey = 03025324888e429ab8e3dbaf1f7802648b9cd01e9b418485c5fa4c1b9b5700e1a6
+ address = bc1q8c6fshw2dlwun7ekn9qwf37cu2rn755upcp6el
+</pre>
+
+==Reference==
+
+* [[bip-0032.mediawiki|BIP32 - Hierarchical Deterministic Wallets]]
+* [[bip-0043.mediawiki|BIP43 - Purpose Field for Deterministic Wallets]]
+* [[bip-0044.mediawiki|BIP44 - Multi-Account Hierarchy for Deterministic Wallets]]
+* [[bip-0049.mediawiki|BIP49 - Derivation scheme for P2WPKH-nested-in-P2SH based accounts]]
+* [[bip-0141.mediawiki|BIP141 - Segregated Witness (Consensus layer)]]
+* [[bip-0173.mediawiki|BIP173 - Base32 address format for native v0-16 witness outputs]]
diff --git a/bip-0098.mediawiki b/bip-0098.mediawiki
new file mode 100644
index 0000000..6bdf784
--- /dev/null
+++ b/bip-0098.mediawiki
@@ -0,0 +1,308 @@
+<pre>
+ BIP: 98
+ Layer: Consensus (soft fork)
+ Title: Fast Merkle Trees
+ Author: Mark Friedenbach <mark@friedenbach.org>
+ Kalle Alm <kalle.alm@gmail.com>
+ BtcDrak <btcdrak@gmail.com>
+ Comments-Summary: No comments yet.
+ Comments-URI: https://github.com/bitcoin/bips/wiki/Comments:BIP-0098
+ Status: Draft
+ Type: Standards Track
+ Created: 2017-08-24
+ License: CC-BY-SA-4.0
+ License-Code: MIT
+</pre>
+
+==Abstract==
+
+In many applications it is useful to prove membership of a data element in a set without having to reveal the entire contents of that set.
+The Merkle hash-tree, where inner/non-leaf nodes are labeled with the hash of the labels or values of its children, is a cryptographic tool that achieves this goal.
+Bitcoin uses a Merkle hash-tree construct for committing the transactions of a block into the block header.
+This particular design, created by Satoshi, suffers from a serious flaw related to duplicate entries documented in the National Vulnerability Database as CVE-2012-2459[1], and also suffers from less than optimal performance due to unnecessary double-hashing.
+
+This Bitcoin Improvement Proposal describes a more efficient Merkle hash-tree construct that is not vulnerable to CVE-2012-2459
+and achieves an approximate 55% decrease in hash-tree construction and validation times as compared with fully optimized implementations of the Satoshi Merkle hash-tree construct.
+
+==Copyright==
+
+This BIP is licensed under a Creative Commons Attribution-ShareAlike license. All provided source code is licensed under the MIT license.
+
+==Motivation==
+
+A Merkle hash-tree is a directed acyclic graph data structure where all non-terminal nodes are labeled with the hash of combined labels or values of the node(s) it is connected to.
+Bitcoin uses a unique Merkle hash-tree construct invented by Satoshi for calculating the block header commitment to the list of transactions in a block.
+While it would be convenient for new applications to make use of this same data structure so as to share implementation and maintenance costs, there are three principal drawbacks to reuse.
+
+First, Satoshi's Merkle hash-tree has a serious vulnerability[1] related to duplicate tree entries that can cause bugs in protocols that use it.
+While it is possible to secure protocols and implementations against exploitation of this flaw, doing so requires foresight, and designing secure protocols that work around this vulnerability is somewhat tricky.
+Designers of new protocols ought to avoid using the Satoshi Merkle hash-tree construct wherever possible in order to responsibly decrease the likelihood of downstream bugs in naïve implementations.
+
+Second, Satoshi's Merkle hash-tree performs an unnecessary number of cryptographic hash function compression rounds, so that construction and validation require approximately three (3) times more computation than is strictly necessary in a naïve implementation, or 2.32x more in an implementation specialized for this purpose only[2].
+New implementations that do not require backwards compatibility ought to consider hash-tree implementations that do not carry this unnecessary performance hit.
+
+Third, Satoshi's algorithm presumes construction of a tree index from an ordered list, and therefore is designed to support balanced trees with a uniform path length from root to leaf for all elements in the tree.
+Many applications, on the other hand, benefit from having unbalanced trees, particularly if the shorter path is more likely to be used.
+While it is possible to make a few elements of a Satoshi hash-tree have shorter paths than the others, the tricks for doing so are dependent on the size of the tree and not very flexible.
+
+Together these three reasons provide justification for specifying a standard Merkle hash-tree structure for use in new protocols that fixes these issues.
+This BIP describes such a structure, and provides an example implementation.
+
+==Specification==
+
+A Merkle hash-tree as defined by this BIP is an arbitrarily-balanced binary tree whose terminal/leaf nodes are labelled with the double-SHA256 hashes of data, whose format is outside the scope of this BIP, and whose inner nodes are labelled with the fast-SHA256 hash of their children's labels.
+The following image depicts an example unbalanced hash-tree:
+
+:: [[File:bip-0098/unbalanced-hash-tree.png]]
+
+'''A''', '''B''', and '''C''' are leaf labels, 32-byte double-SHA256 hashes of the data associated with the leaf.
+'''Node''' and '''Root''' are inner nodes, whose labels are fast-SHA256 (defined below) hashes of their respective children's labels.
+'''Node''' is labelled with the fast-SHA256 hash of the concatenation of '''B''' and '''C'''.
+'''Root''' is labelled with the fast-SHA256 hash of the concatenation of '''A''' and '''Node''', and is the ''Merkle root'' of the tree.
+Nodes with single children are not allowed.
+
+The ''double-SHA256'' cryptographic hash function takes an arbitrary-length data as input and produces a 32-byte hash by running the data through the SHA-256 hash function as specified in FIPS 180-4[3], and then running the same hash function again on the 32-byte result, as a protection against length-extension attacks.
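The double-SHA256 definition above maps directly onto a generic SHA-256 primitive; a minimal Python sketch using only the standard library (an editorial illustration, not part of the diff):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """double-SHA256: SHA-256 applied twice, as a guard against length-extension attacks."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()
```

Leaf labels in the tree above are exactly this function applied to the leaf's data.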
+
+The ''fast-SHA256'' cryptographic hash function takes two 32-byte hash values, concatenates these to produce a 64-byte buffer, and applies a single run of the SHA-256 hash function with a custom 'initialization vector' (IV) and without message padding.
+The result is a 32-byte 'midstate' which is the combined hash value and the label of the inner node.
+The changed IV protects against path-length extension attacks (grinding to interpret a hash as both an inner node and a leaf).
+fast-SHA256 is only defined for two 32-byte inputs.
+The custom IV is the intermediate hash value generated after performing a standard SHA-256 of the following hex-encoded bytes and extracting the midstate:
+
+ cbbb9d5dc1059ed8 e7730eaff25e24a3 f367f2fc266a0373 fe7a4d34486d08ae
+ d41670a136851f32 663914b66b4b3c23 1b9e3d7740a60887 63c11d86d446cb1c
+
+This data is the first 512 fractional bits of the square root of 23, the 9th prime number.
+The resulting midstate is used as IV for the fast-SHA256 cryptographic hash function:
+
+ static unsigned char _MidstateIV[32] =
+ { 0x89, 0xcc, 0x59, 0xc6, 0xf7, 0xce, 0x43, 0xfc,
+ 0xf6, 0x12, 0x67, 0x0e, 0x78, 0xe9, 0x36, 0x2e,
+ 0x76, 0x8f, 0xd2, 0xc9, 0x18, 0xbd, 0x42, 0xed,
+ 0x0e, 0x0b, 0x9f, 0x79, 0xee, 0xf6, 0x8a, 0x24 };
+
+As fast-SHA256 is only defined for two (2) 32-byte hash inputs, there are necessarily two special cases:
+an empty Merkle tree is not allowed, nor is any root hash defined for such a "tree";
+and a Merkle tree with a single value has a root hash label equal to that self-same value of the leaf branch, the only node in the tree (a passthrough operation with no hashing).
+
+===Rationale===
+
+The fast-SHA256 hash function can be calculated 2.32x faster than a specialized double-SHA256 implementation[2], or three (3) times faster than an implementation applying a generic SHA-256 primitive twice,
+as hashing 64 bytes of data with SHA-256 as specified by FIPS 180-4[3] takes two compression runs (because of message padding) and then a third compression run for the double-SHA256 construction.
+Validating a fast-SHA256 Merkle root is therefore more than twice as fast as the double-SHA256 construction used by Satoshi in bitcoin.
+Furthermore the fastest fast-SHA256 implementation ''is'' the generic SHA-256 implementation, enabling generic circuitry and code reuse without a cost to performance.
+
+The application of fast-SHA256 to inner node label updates is safe in this limited domain because the inputs are hash values and fixed in number and in length,
+so the sorts of attacks prevented by message padding and double-hashing do not apply.
+
+The 'initialization vector' for fast-SHA256 is changed in order to prevent a category of attacks on higher level protocols where a partial collision can serve as both a leaf hash and as an inner node commitment to another leaf hash.
+The IV is computed using standard SHA-256 plus midstate extraction so as to preserve compatibility with cryptographic library interfaces that do not support custom IVs, at the cost of a 2x performance hit if neither custom IVs nor resuming from midstate are supported.
+The data hashed is a nothing-up-my-sleeve number that is unlikely to have a known hash preimage.
+The prime 23 was chosen because the leading fractional bits of the first eight (8) primes, two (2) through nineteen (19), are constants used in the setup of SHA-256 itself.
+Using the next prime in sequence reduces the likelihood of introducing weakness due to reuse of a constant factor.
+
+The Merkle root hash of a single element tree is a simple pass-through of the leaf hash without modification so as to allow for chained validation of split proofs.
+This is particularly useful when the validation environment constrains proof sizes, such as push limits in Bitcoin script.
+Chained validation allows a verifier to split one proof into two or more, where the leaf is shown to be under an inner node, and that inner node is shown to be under the root.
+Without pass-through hashing in a single-element tree, use of chained validation would unnecessarily introduce a minimum path length requirement equal to the number of chain links.
+Pass-through hashing of single elements allows instead for one or more of the chained validations to use a "NOP" proof consisting of a zero-length path,
+thereby allowing, for example, a fixed series of four (4) chained validations to verify a length three (3) or shorter path.
+
+==Inclusion Proofs==
+
+An important use of Merkle hash-trees is the ability to compactly prove membership with log-sized proofs.
+This section specifies a standard encoding for a multi-element inclusion proof.
+
+To prove that a set of hashes is contained within a Merkle tree with a given root requires four pieces of information:
+
+# The root hash of the Merkle tree;
+# The hash values to be verified, a set usually consisting of the double-SHA256 hash of data elements, but potentially the labels of inner nodes instead, or both;
+# The paths from the root to the nodes containing the values under consideration, expressed as a serialized binary tree structure; and
+# The hash values of branches not taken along those paths.
+
+Typically the last two elements, the paths and the elided branch hashes, are lumped together and referred to as the ''proof''.
+
+Serialization begins with a variable-length integer (VarInt) used to encode N, the number of internal nodes in the proof.
+Next the structure of the tree is traversed using a depth-first, left-to-right, pre-order algorithm to visit each internal node; the nodes are serialized using a packed 3-bit representation for the configuration of each node, consuming <code>(3*N + 7) / 8</code> bytes.
+Then the number of skipped hashes (those included in the proof, not verified by the proof) is serialized as a variable-length integer (VarInt),
+followed by the hashes themselves in the order previously traversed.
+
+There are eight possible configurations of internal nodes, as given in the following diagram:
+
+:: [[File:bip-0098/node-variants.png]]
+
+In this diagram, DESCEND means the branch links to another internal node, as indicated by its child graph elements labeled "...";
+SKIP means the branch contains a hash of an elided subtree or element, and the fast-SHA256 root hash of this subtree or double-SHA256 hash of the element is included in the proof structure; and
+VERIFY means the branch contains an externally provided hash that is needed as witness for the verification of the proof.
+In tabular form, these code values are:
+
+{| class="wikitable"
+|-
+| scope="col"| Code
+| scope="col"| Left
+| scope="col"| Right
+|-
+| scope="row"| 000
+| VERIFY
+| SKIP
+|-
+| scope="row"| 001
+| VERIFY
+| VERIFY
+|-
+| scope="row"| 010
+| VERIFY
+| DESCEND
+|-
+| scope="row"| 011
+| DESCEND
+| SKIP
+|-
+| scope="row"| 100
+| DESCEND
+| VERIFY
+|-
+| scope="row"| 101
+| DESCEND
+| DESCEND
+|-
+| scope="row"| 110
+| SKIP
+| VERIFY
+|-
+| scope="row"| 111
+| SKIP
+| DESCEND
+|}
+
+These 3-bit codes are packed into a byte array such that eight (8) codes would fit in every three (3) bytes.
+The order of filling a byte begins with the most significant bit <code>0x80</code> and ends with the least significant bit <code>0x01</code>.
+Unless the number of inner nodes is a multiple of eight (8), there will be excess low-order bits in the final byte of serialization.
+These excess bits must be zero.
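The packing rule above can be sketched in a few lines of Python (an editor's illustration, not normative code; the 3-bit code values are those from the table above):

```python
def pack_node_codes(codes):
    """Pack 3-bit node codes MSB-first into bytes; excess low-order bits are zero."""
    bits = "".join(format(code, "03b") for code in codes)  # 3 bits per node
    bits += "0" * (-len(bits) % 8)                         # zero-fill the final partial byte
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
```

For instance, packing the six codes 0b101, 0b111, 0b011, 0b000, 0b010, 0b001 yields the three bytes 0xbd 0x84 0x40.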
+
+Note that the tree serialization is self-segmenting.
+By tracking tree structure a proof reader will know when the parser has reached the last internal node.
+The number of inner nodes serialized in the proof MUST equal the number of nodes inferred from the tree structure itself.
+Similarly, the number of SKIP hashes can also be inferred from the tree structure as serialized, and MUST equal the number of hashes provided within the proof.
+
+The single-hash proof has N=0 (the number of inner nodes),
+the tree structure is not serialized (as there are no inner nodes),
+and the number of SKIP hashes can be either 0 or 1.
+
+===Example===
+
+Consider the following Merkle tree structure:
+
+:: [[File:bip-0098/traversal-example.png]]
+
+There are six (6) internal nodes.
+The depth-first, left-to-right, pre-order traversal of the tree visits these nodes in the following order: A, B, D, F, C, then E.
+There are three (3) skipped hashes, visited in the following order: 0x00..., 0x66..., and 0x22...
+The remaining four (4) hashes are provided at runtime to be verified by the proof.
+
+{|
+| scope="col"|
+| scope="col"| Byte 1
+| scope="col"| Byte 2
+| scope="col"| Byte 3
+|-
+| scope="row"| Bits
+| 76543210
+| 76543210
+| 76543210
+|-
+| scope="row"| Nodes
+| AAABBBDD
+| DFFFCCCE
+| EE------
+|-
+| scope="row"| Code
+| 10111101
+| 10000100
+| 01000000
+|}
+
+The serialization begins with the VarInt encoded number of inner nodes, <code>0x06</code>, followed by the tree serialization itself, <code>0xbd8440</code>.
+Next the number of SKIP hashes is VarInt encoded, <code>0x03</code>, followed by the three (3) hashes in sequence.
+The resulting 101-byte proof, encoded in base64:
+
+ Br2EQAMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGZmZmZmZmZmZmZmZmZmZmZmZmZm
+ ZmZmZmZmZmZmZmZmIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiI=
+
+===Rationale===
+
+The 3-bit encoding for inner nodes allows encoding all relevant configurations of the nodes where the left and right branches can each be one of {DESCEND, SKIP, VERIFY}.
+The excluded 9th possibility would have both branches as SKIP:
+
+:: [[File:bip-0098/skip-skip.png]]
+
+This possibility is not allowed as for verification purposes it is entirely equivalent to the shorter proof where the branch to that node was SKIP'ed.
+Disallowing a node with two SKIP branches eliminates what would otherwise be a source of proof malleability.
+
+The number of hashing operations required to verify a proof is one less than the number of hashes (SKIP and VERIFY combined),
+and is exactly equal to the number of inner nodes serialized as the beginning of the proof as N.
+The variable-length integer encoding has the property that serialized integers, sorted lexicographically, will also be sorted numerically.
+Since the first serialized item is the number of inner nodes, sorting proofs lexicographically has the effect of sorting the proofs by the amount of work required to verify.
+
+The number of hashes required as input for verification of a proof is N+1 minus the number of SKIP hashes,
+and can be quickly calculated without parsing the tree structure.
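+
+These counting rules can be checked with a few lines of arithmetic, using the values from the example above:
+
```python
# Worked example: six inner nodes (A-F), three SKIP hashes, four
# hashes supplied at runtime for verification.
n_inner = 6                       # inner nodes, serialized as N
n_skip = 3                        # SKIP hashes carried in the proof
n_hashes = n_inner + 1            # total hashes (SKIP and VERIFY combined)
assert n_hashes - 1 == n_inner    # hashing operations needed to verify
assert n_hashes - n_skip == 4     # hashes required as verification input
```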
+
+The coding and packing rules for the serialized tree structure were also chosen to make lexicographical comparison useful (or at least not meaningless).
+If we consider a fully-expanded tree (no SKIP hashes, all VERIFY) to be encoding a list of elements in the order traversed depth-first from left-to-right,
+then we can extract proofs for subsets of the list by SKIP'ing the hashes of missing values and recursively pruning any resulting SKIP,SKIP nodes.
+Lexicographically comparing the resulting serialized tree structures is the same as lexicographically comparing lists of indices from the original list verified by the derived proof.
+
+Because the number of inner nodes and the number of SKIP hashes is extractible from the tree structure,
+both variable-length integers in the proof are redundant and could have been omitted.
+However that would require either construction and storage of the explicit tree in memory at deserialization time,
+or duplication of the relatively complicated tree parsing code in both the serialization and verification methods.
+For that reason (as well as to handle the single-hash edge case) the redundant inner node and SKIP hash counts are made explicit in the serialization,
+and the two values must match what is inferred from the tree structure for a proof to be valid.
+This makes deserialization trivial and defers tree construction until verification time,
+which has the additional benefit of enabling log-space verification algorithms.
+
+==Fast Merkle Lists==
+
+Many applications use a Merkle tree to provide indexing of, or compact membership proofs about the elements in a list.
+This addendum specifies an algorithm that constructs a canonical balanced tree structure for lists of various lengths.
+It differs in a subtle but important way from the algorithm used by Satoshi so as to structurally prevent the vulnerability described in [1].
+
+# Begin with a list of arbitrary data strings.
+# Pre-process the list by replacing each element with its double-SHA256 hash.
+# If the list is empty, return the zero hash.
+# While the list has 2 or more elements,
+#* Pass through the list combining adjacent entries with the fast-SHA256 hash. If the list has an odd number of elements, leave the last element as-is (this fixes [1]). This step reduces a list of N elements to ceil(N/2) entries.
+# The last remaining item in the list is the Merkle root.
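+
+A sketch of this list algorithm follows. BIP98's fast-SHA256 (a single SHA-256 compression of the 64-byte input, without padding) is approximated here by an ordinary single SHA-256 call, so the tree structure matches the specification even though the inner-node digests would differ:
+
```python
import hashlib

def dsha256(data):
    """Double-SHA256, used to pre-process list elements into leaves."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def fast_merkle_root(elements):
    if not elements:
        return b'\x00' * 32                    # zero hash for the empty list
    level = [dsha256(e) for e in elements]     # leaves use double-SHA256
    while len(level) > 1:
        # Combine adjacent entries; single SHA-256 stands in for fast-SHA256.
        nxt = [hashlib.sha256(level[i] + level[i + 1]).digest()
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:                     # odd length: carry the last
            nxt.append(level[-1])              # element up without duplication
        level = nxt
    return level[0]

# Three elements: the third leaf is carried up, never duplicated.
h = [dsha256(x) for x in (b'a', b'b', b'c')]
expected = hashlib.sha256(hashlib.sha256(h[0] + h[1]).digest() + h[2]).digest()
assert fast_merkle_root([b'a', b'b', b'c']) == expected
```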
+
+This algorithm differs from Merkle lists used in bitcoin in two ways.
+First, fast-SHA256 is used instead of double-SHA256 for inner node labels.
+Second, final entries on an odd-length list are not duplicated and hashed, which is the mistake that led to CVE-2012-2459[1].
+
+==Implementation==
+
+An implementation of this BIP for extraction of Merkle branches and fast, log-space Merkle branch validation is available at the following Github repository:
+
+[https://github.com/maaku/bitcoin/tree/fast-merkle-tree]
+
+Also included in this repo is a 'merklebranch' RPC for calculating root values and extracting inclusion proofs for both arbitrary trees and trees constructed from lists of values using the algorithm in this BIP,
+and a 'mergemerklebranch' RPC for unifying two or more fast Merkle tree inclusion proofs, replacing SKIP hashes in one proof with a subtree extracted from another.
+
+==Deployment==
+
+This BIP is used by BIP116 (MERKLEBRANCHVERIFY)[4] to add Merkle inclusion proof verification to script by means of a soft-fork NOP expansion opcode.
+Deployment of MERKLEBRANCHVERIFY would make the contents of this BIP consensus critical.
+The deployment plan for BIP116 is covered in the text of that BIP.
+
+==Compatibility==
+
+This BIP on its own does not cause any backwards incompatibility.
+
+==References==
+
+[1] [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2012-2459 National Vulnerability Database: CVE-2012-2459]
+
+[2] [https://github.com/sipa/bitcoin/tree/201709_dsha256_64 github.com:sipa/bitcoin 201709_dsha256_64] Pieter Wuille, September 2017, personal communication. By making use of knowledge that the inputs at each stage are fixed length, Mr. Wuille was able to achieve a 22.7% reduction in the time it takes to compute the double-SHA256 hash of 64 bytes of data, the hash aggregation function of the Satoshi Merkle tree construction.
+
+[3] [http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.180-4.pdf Secure Hash Standard]
+
+[4] [https://github.com/bitcoin/bips/blob/master/bip-0116.mediawiki BIP 116 MERKLEBRANCHVERIFY]
diff --git a/bip-0098/build.sh b/bip-0098/build.sh
new file mode 100755
index 0000000..a8a3155
--- /dev/null
+++ b/bip-0098/build.sh
@@ -0,0 +1,6 @@
+#!/bin/sh
+
+dot -Tpng -o node-variants.png node-variants.dot
+dot -Tpng -o skip-skip.png skip-skip.dot
+dot -Tpng -o traversal-example.png traversal-example.dot
+dot -Tpng -o unbalanced-hash-tree.png unbalanced-hash-tree.dot
diff --git a/bip-0098/node-variants.dot b/bip-0098/node-variants.dot
new file mode 100644
index 0000000..7171346
--- /dev/null
+++ b/bip-0098/node-variants.dot
@@ -0,0 +1,85 @@
+digraph G {
+ row1 [shape=none, label=""]
+
+ A [label="000"]
+ A -> Al [label="L"]
+ Al [label="VERIFY"]
+ A -> Ar [label="R"]
+ Ar [label="SKIP"]
+
+ B [label="001"]
+ B -> Bl [label="L"]
+ Bl [label="VERIFY"]
+ B -> Br [label="R"]
+ Br [label="VERIFY"]
+
+ { rank = same; row1; A; B; }
+
+ C [label="010"]
+ C -> Cl [label="L"]
+ Cl [label="VERIFY"]
+ C -> Cr [label="R"]
+ Cr [label="DESCEND"]
+ Cr -> Crl
+ Crl [label="..."]
+ Cr -> Crr
+ Crr [label="..."]
+
+ D [label="011"]
+ D -> Dl [label="L"]
+ Dl [label="DESCEND"]
+ Dl -> Dll
+ Dll [label="..."]
+ Dl -> Dlr
+ Dlr [label="..."]
+ D -> Dr [label="R"]
+ Dr [label="SKIP"]
+
+ E [label="100"]
+ E -> El [label="L"]
+ El [label="DESCEND"]
+ El -> Ell
+ Ell [label="..."]
+ El -> Elr
+ Elr [label="..."]
+ E -> Er [label="R"]
+ Er [label="VERIFY"]
+
+ row1 -> invis [style=invis]
+ invis [shape=none, label=""]
+ invis -> C [style=invis]
+ { rank = same; C; D; E; }
+
+ F [label="101"]
+ F -> Fl [label="L"]
+ Fl [label="DESCEND"]
+ Fl -> Fll
+ Fll [label="..."]
+ Fl -> Flr
+ Flr [label="..."]
+ F -> Fr [label="R"]
+ Fr [label="DESCEND"]
+ Fr -> Frl
+ Frl [label="..."]
+ Fr -> Frr
+ Frr [label="..."]
+
+ G [label="110"]
+ G -> Gl [label="L"]
+ Gl [label="SKIP"]
+ G -> Gr [label="R"]
+ Gr [label="VERIFY"]
+
+ H [label="111"]
+ H -> Hl [label="L"]
+ Hl [label="SKIP"]
+ H -> Hr [label="R"]
+ Hr [label="DESCEND"]
+ Hr -> Hrl
+ Hrl [label="..."]
+ Hr -> Hrr
+ Hrr [label="..."]
+
+ Crl -> F [style=invis]
+ { rank = same; F; G; H; }
+}
diff --git a/bip-0098/node-variants.png b/bip-0098/node-variants.png
new file mode 100644
index 0000000..991d7bc
--- /dev/null
+++ b/bip-0098/node-variants.png
Binary files differ
diff --git a/bip-0098/skip-skip.dot b/bip-0098/skip-skip.dot
new file mode 100644
index 0000000..5e633d6
--- /dev/null
+++ b/bip-0098/skip-skip.dot
@@ -0,0 +1,7 @@
+digraph G {
+ A [label="???"]
+ A -> Al [label="L"]
+ Al [label="SKIP"]
+ A -> Ar [label="R"]
+ Ar [label="SKIP"]
+} \ No newline at end of file
diff --git a/bip-0098/skip-skip.png b/bip-0098/skip-skip.png
new file mode 100644
index 0000000..d3e7c45
--- /dev/null
+++ b/bip-0098/skip-skip.png
Binary files differ
diff --git a/bip-0098/traversal-example.dot b/bip-0098/traversal-example.dot
new file mode 100644
index 0000000..2993642
--- /dev/null
+++ b/bip-0098/traversal-example.dot
@@ -0,0 +1,32 @@
+digraph G {
+ a [label="A\n101"]
+ a -> b
+ a -> c
+
+ b [label="B\n111"]
+ b -> s0
+ s0 [label="SKIP\n0x00..."]
+ b -> d
+
+ d [label="D\n011"]
+ d -> f
+ d -> s1
+ s1 [label="SKIP\n0x22..."]
+
+ f [label="F\n000"]
+ f -> v1
+ v1 [label="VERIFY\n0x55..."]
+ f -> s2
+ s2 [label="SKIP\n0x66..."]
+
+ c [label="C\n010"]
+ c -> v2
+ v2 [label="VERIFY\n0x11..."]
+ c -> e
+
+ e [label="E\n001"]
+ e -> v3
+ v3 [label="VERIFY\n0x33..."]
+ e -> v4
+ v4 [label="VERIFY\n0x44..."]
+}
diff --git a/bip-0098/traversal-example.png b/bip-0098/traversal-example.png
new file mode 100644
index 0000000..a6a7954
--- /dev/null
+++ b/bip-0098/traversal-example.png
Binary files differ
diff --git a/bip-0098/unbalanced-hash-tree.dot b/bip-0098/unbalanced-hash-tree.dot
new file mode 100644
index 0000000..c637652
--- /dev/null
+++ b/bip-0098/unbalanced-hash-tree.dot
@@ -0,0 +1,11 @@
+digraph G {
+ 0 [label="Root\nH(A || H(B || C))"]
+ 0 -> A
+ A [label="A\nskip"]
+ 0 -> 1
+ 1 [label="Node\nH(B || C)"]
+ 1 -> B
+ B [label="B\nskip"]
+ 1 -> C
+ C [label="C\nverify"]
+}
diff --git a/bip-0098/unbalanced-hash-tree.png b/bip-0098/unbalanced-hash-tree.png
new file mode 100644
index 0000000..339bb22
--- /dev/null
+++ b/bip-0098/unbalanced-hash-tree.png
Binary files differ
diff --git a/bip-0115.mediawiki b/bip-0115.mediawiki
index 52366ab..9432f5c 100644
--- a/bip-0115.mediawiki
+++ b/bip-0115.mediawiki
@@ -83,7 +83,7 @@ Why are block heights required to be absolute, rather than relative?
Why are blocks older than 52596 deep in the chain not verified?
* This is to avoid creating an infinite storage requirement from all full nodes which would be necessary to maintain all the block headers indefinitely. 52596 block headers requires a fixed size of approximately 4 MB.
-* In any case where you might want to specify a deeper block, you can also just as well specify a more recent one that decends from it.
+* In any case where you might want to specify a deeper block, you can also just as well specify a more recent one that descends from it.
* It is assumed that 1 year is sufficient time to double-spend any common UTXOs on all blockchains of interest.
* If a deeper check is needed, it can be softforked in. Making the check more shallow, on the other hand, is a hardfork.
diff --git a/bip-0116.mediawiki b/bip-0116.mediawiki
new file mode 100644
index 0000000..7f103ec
--- /dev/null
+++ b/bip-0116.mediawiki
@@ -0,0 +1,145 @@
+<pre>
+ BIP: 116
+ Layer: Consensus (soft fork)
+ Title: MERKLEBRANCHVERIFY
+ Author: Mark Friedenbach <mark@friedenbach.org>
+ Kalle Alm <kalle.alm@gmail.com>
+ BtcDrak <btcdrak@gmail.com>
+ Comments-Summary: No comments yet.
+ Comments-URI: https://github.com/bitcoin/bips/wiki/Comments:BIP-0116
+ Status: Draft
+ Type: Standards Track
+ Created: 2017-08-25
+ License: CC-BY-SA-4.0
+ License-Code: MIT
+</pre>
+
+==Abstract==
+
+A general approach to bitcoin contracts is to fully enumerate the possible spending conditions and then program verification of these conditions into a single script.
+At redemption, the spending condition used is explicitly selected, e.g. by pushing a value on the witness stack which cascades through a series of if/else constructs.
+
+This approach has significant downsides, such as requiring all program pathways to be visible in the scriptPubKey or redeem script, even those which are not used at validation.
+This wastes space on the block chain, restricts the size of possible scripts due to push limits, and impacts both privacy and fungibility as details of the contract can often be specific to the user.
+
+This BIP proposes a new soft-fork upgradeable opcode, MERKLEBRANCHVERIFY, which allows script writers to commit to a set of data elements and have one or more of these elements be provided at redemption without having to reveal the entire set.
+As these data elements can be used to encode policy, such as public keys or validation subscripts, the MERKLEBRANCHVERIFY opcode can be used to overcome these limitations of existing bitcoin script.
+
+==Copyright==
+
+This BIP is licensed under a Creative Commons Attribution-ShareAlike license. All provided source code is licensed under the MIT license.
+
+==Specification==
+
+MERKLEBRANCHVERIFY redefines the existing NOP4 opcode.
+When executed, if any of the following conditions are true, the script interpreter will terminate with an error:
+
+# the stack contains less than three (3) items;
+# the first item on the stack is more than 2 bytes;
+# the first item on the stack, interpreted as an integer, N, is negative or not minimally encoded;
+# the second item on the stack is not exactly 32 bytes;
+# the third item on the stack is not a serialized Merkle tree inclusion proof as specified by BIP98[1] and requiring exactly <code>floor(N/2)</code> VERIFY hashes; or
+# the remainder of the stack contains less than <code>floor(N/2)</code> additional items, together referred to as the input stack elements.
+
+If the low-order bit of N is clear, <code>N&1 == 0</code>, each input stack element is hashed using double-SHA256.
+Otherwise, each element must be exactly 32 bytes in length and is interpreted as a serialized hash.
+These are the VERIFY hashes.
+
+If the fast Merkle root computed from the Merkle tree inclusion proof, the third item on the stack,
+with the VERIFY hashes in the order as presented on the stack, from top to bottom,
+does not exactly match the second item on the stack,
+the script interpreter will terminate with an error.
+
+Otherwise, script execution will continue as if a NOP had been executed.
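+
+The argument handling above can be sketched in Python (a minimal illustration; <code>verify_hashes</code> and its error messages are hypothetical, not Bitcoin Core code, and the fast Merkle root computation itself is omitted):
+
```python
import hashlib

def double_sha256(data):
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_hashes(n, elements):
    """Derive the VERIFY hashes from the integer N (first stack item)
    and the input stack elements, per the rules above."""
    count = n >> 1                   # floor(N/2) input stack elements
    if len(elements) < count:
        raise ValueError('too few input stack elements')
    picked = elements[:count]
    if n & 1:                        # low bit set: already-computed hashes
        if any(len(e) != 32 for e in picked):
            raise ValueError('pre-hashed elements must be 32 bytes')
        return list(picked)
    return [double_sha256(e) for e in picked]

# N=2: one leaf value, hashed with double-SHA256 (as in a 1-of-N spend)
assert verify_hashes(2, [b'pubkey']) == [double_sha256(b'pubkey')]
# N=3: one element passed through as a serialized hash
assert verify_hashes(3, [b'\x11' * 32]) == [b'\x11' * 32]
```
+
+With the VERIFY hashes in hand, the interpreter computes the fast Merkle root from the inclusion proof and compares it against the 32-byte second stack item.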
+
+==Motivation==
+
+Although BIP16 (Pay to Script Hash)[2] and BIP141 (Segregated Witness)[3] both allow the redeem script to be kept out of the scriptPubKey and therefore out of the UTXO set, the entire spending conditions for a coin must nevertheless be revealed when that coin is spent.
+This includes execution pathways or policy conditions which end up not being needed by the redemption.
+Not only is it inefficient to require this unnecessary information to be present on the blockchain, albeit in the witness, it also impacts privacy and fungibility as some unused script policies may be identifying.
+Using a Merkle hash tree to commit to the policy options, and then only forcing revelation of the policy used at redemption minimizes this information leakage.
+
+Using Merkle hash trees to commit to policy allows for considerably more complex contracts than would otherwise be possible, due to various built-in script size and runtime limitations.
+With Merkle commitments to policy these size and runtime limitations constrain the complexity of any one policy that can be used rather than the sum of all possible policies.
+
+==Rationale==
+
+The MERKLEBRANCHVERIFY opcode uses fast Merkle hash trees as specified by BIP98[1] rather than the construct used by Satoshi for committing transactions to the block header, as the latter has a known vulnerability relating to duplicate entries that introduces a source of malleability to downstream protocols[4].
+A source of malleability in Merkle proofs could potentially lead to spend vulnerabilities in protocols that use MERKLEBRANCHVERIFY.
+For example, a compact 2-of-N policy could be written by using MERKLEBRANCHVERIFY to prove that two keys are extracted from the same tree, one at a time, then checking the proofs for bitwise equality to make sure the same entry wasn't used twice.
+With the vulnerable Merkle tree implementation there are privileged positions in unbalanced Merkle trees that allow multiple proofs to be constructed for the same, single entry.
+
+BIP141 (Segregated Witness)[3] provides support for a powerful form of script upgrades called script versioning, which is able to achieve the sort of upgrades which would previously have been hard-forks.
+If script versioning were used for deployment then MERKLEBRANCHVERIFY could be written to consume its inputs, which would provide a small 2-byte savings for many anticipated use cases.
+However the more familiar NOP-expansion soft-fork mechanism used by BIP65 (CHECKLOCKTIMEVERIFY)[5] and BIP112 (CHECKSEQUENCEVERIFY)[6] was chosen over script versioning for the following two reasons:
+
+# '''Infrastructure compatibility.''' Using soft-fork NOP extensions allows MERKLEBRANCHVERIFY to be used by any existing software able to consume custom scripts, and results in standard P2SH or P2WSH-nested-in-P2SH addresses without the need for BIP143[7] signing code. This allows MERKLEBRANCHVERIFY to be used immediately by services that need it rather than wait on support for script versioning and/or BIP-143[7] signatures in tools and libraries.
+# '''Delayed decision on script upgrade protocol.''' There are unresolved issues with respect to how script versioning should be used for future script upgrades. There are only 16 available script versions reserved for future use, and so they should be treated as a scarce resource. Additionally, script feature versioning should arguably be specified in the witness and the BIP141 script versioning only be used to specify the structure of the witness, however no such protocol exists as of yet. Using the NOP-expansion space prevents MERKLEBRANCHVERIFY from being stalled due to waiting on script upgrade procedure to be worked out, while making use of expansion space that is already available.
+
+The MERKLEBRANCHVERIFY opcode allows for VERIFY hashes to be presented directly, or calculated from the leaf values using double-SHA256.
+In most cases the latter approach is expected to be used so that the leaf value(s) can be used for both branch validation and other purposes without any explicit preprocessing.
+However allowing already-calculated hash values as inputs enables using chained MERKLEBRANCHVERIFY opcodes to verify branches of trees with proofs large enough that they would not fit in the 520 byte script push limitation.
+As specified, a 30-branch path can be verified by proving the path from the leaf to the 15th interior node as the 'root', then proving that node's hash to be a child of the actual Merkle tree root hash.
+Validation of a 256-branch path (e.g. a binary prefix tree with a hash value as key) would require 18 chained validations, which would fit within current script limitations.
+
+==Applications==
+
+===1-of-N for large N===
+
+Here is a redeem script that allows a coin to be spent by any key from a large set, without linear scaling in script size:
+
+ redeemScript: <root> 2 MERKLEBRANCHVERIFY 2DROP DROP CHECKSIG
+ witness: <sig> <pubkey> <proof>
+
+The redeem script looks very similar to the standard pay-to-pubkey-hash, except instead of showing that the pubkey's hash is the same as the commitment given, we demonstrate that the pubkey is one of potentially many pubkeys included in the Merkle tree committed to in the redeem script.
+The low-order bit of the first parameter, 2, is clear, meaning that there is one input (<code>(2>>1) == 1</code>), the serialized pubkey, and its VERIFY hash needs to be calculated by MERKLEBRANCHVERIFY using double-SHA256.
+
+===Honeypots===
+
+As described by Pieter Wuille[8] the 1-of-N scheme is particularly useful for constructing honeypots.
+The desire is to put a large bounty on a server, larger than the value of the server itself so that if the server is compromised it is highly likely that the hacker will claim the bitcoin, thereby revealing the intrusion.
+However if there are many servers, e.g. 1,000, it becomes excessively expensive to lock up separate bounties for each server.
+It would be desirable if the same bounty was shared across multiple servers in such a way that the spend would reveal which server was compromised.
+
+This is accomplished by generating 1,000 different keys, building a hash tree of these public keys, and placing each key and associated Merkle path on separate servers.
+When the honeypot is claimed, the (previous) owner of the coins can tell which server was compromised from the key and path used to claim the funds.
+
+==Implementation==
+
+An implementation of this BIP, including both consensus code updates and tests is available at the following Github repository:
+
+[https://github.com/maaku/bitcoin/tree/merkle-branch-verify]
+
+==Deployment==
+
+This BIP will be deployed by BIP8 (Version bits with lock-in by height)[9] with the name "merklebranchverify" and using bit 2.
+
+For Bitcoin mainnet, the BIP8 startheight will be at height M to be determined and BIP8 timeout activation will occur on height M + 50,400 blocks.
+
+For Bitcoin testnet, the BIP8 startheight will be at height T to be determined and BIP8 timeout activation will occur on height T + 50,400 blocks.
+
+We note that DISCOURAGE_UPGRADABLE_NOPS means that transactions which use this feature are already considered non-standard by the rules of the network, making deployment easier than was the case with, for example, BIP68 (Relative lock-time using consensus-enforced sequence numbers)[9].
+
+==Compatibility==
+
+Old clients will consider OP_MERKLEBRANCHVERIFY a NOP and ignore it. The proof will not be verified, but the transaction will be accepted.
+
+==References==
+
+[1] [https://github.com/bitcoin/bips/blob/master/bip-0098.mediawiki BIP98: Fast Merkle Trees (Consensus layer)]
+
+[2] [https://github.com/bitcoin/bips/blob/master/bip-0016.mediawiki BIP16: Pay to Script Hash]
+
+[3] [https://github.com/bitcoin/bips/blob/master/bip-0141.mediawiki BIP141: Segregated Witness (Consensus layer)]
+
+[4] [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2012-2459 National Vulnerability Database: CVE-2012-2459]
+
+[5] [https://github.com/bitcoin/bips/blob/master/bip-0065.mediawiki BIP65: OP_CHECKLOCKTIMEVERIFY]
+
+[6] [https://github.com/bitcoin/bips/blob/master/bip-0112.mediawiki BIP112: CHECKSEQUENCEVERIFY]
+
+[7] [https://github.com/bitcoin/bips/blob/master/bip-0143.mediawiki BIP143: Transaction Signature Verification for Version 0 Witness Program]
+
+[8] [https://blockstream.com/2015/08/24/treesignatures.html Multisig on steroids using tree signatures]
+
+[9] [https://github.com/bitcoin/bips/blob/master/bip-0068.mediawiki BIP68: Relative lock-time using consensus-enforced sequence numbers]
diff --git a/bip-0117.mediawiki b/bip-0117.mediawiki
new file mode 100644
index 0000000..f4d1b4a
--- /dev/null
+++ b/bip-0117.mediawiki
@@ -0,0 +1,196 @@
+<pre>
+ BIP: 117
+ Layer: Consensus (soft fork)
+ Title: Tail Call Execution Semantics
+ Author: Mark Friedenbach <mark@friedenbach.org>
+ Kalle Alm <kalle.alm@gmail.com>
+ BtcDrak <btcdrak@gmail.com>
+ Comments-Summary: No comments yet.
+ Comments-URI: https://github.com/bitcoin/bips/wiki/Comments:BIP-0117
+ Status: Draft
+ Type: Standards Track
+ Created: 2017-08-25
+ License: CC-BY-SA-4.0
+ License-Code: MIT
+</pre>
+
+==Abstract==
+
+BIP16 (Pay to Script Hash)[1] and BIP141 (Segregated Witness)[2] provide mechanisms by which script policy can be revealed at spend time as part of the execution witness.
+In both cases only a single script can be committed to by the construct.
+While useful for achieving the goals of these proposals, they still require that all policies be specified within the confine of a single script, regardless of whether the policies are needed at the time of spend.
+
+This BIP, in conjunction with BIP116 (MERKLEBRANCHVERIFY)[3] allows for a script to commit to a practically unbounded number of code pathways, and then reveal the actual code pathway used at spend time.
+This achieves a form of generalized MAST[4] enabling decomposition of complex branching scripts into a set of non-branching flat execution pathways, committing to the entire set of possible pathways, and then revealing only the path used at spend time.
+
+==Copyright==
+
+This BIP is licensed under a Creative Commons Attribution-ShareAlike license. All provided source code is licensed under the MIT license.
+
+==Specification==
+
+If, at the end of script execution:
+
+* the execution state is non-clean, meaning
+*# the main stack has more than one item on it, or
+*# the main stack has exactly one item and the alt-stack is not empty;
+* the top-most element of the main stack evaluates as true when interpreted as a bool; and
+* the top-most element is not a single byte or is outside the inclusive range of <code>0x51</code> to <code>0x60</code>,
+
+then that top-most element of the main stack is popped and interpreted as a serialized script and executed,
+while the remaining elements of both stacks remain in place as inputs.
+
+If the above conditions hold except for the last one, such that:
+
+* the top-most element ''is'' a single byte within the inclusive range of <code>0x51</code> (<code>OP_1</code>, meaning N=2) to <code>0x60</code> (<code>OP_16</code>, meaning N=17); and
+* other than this top-most element there are at least N additional elements on the main stack and alt stack combined,
+
+then the top-most element of the main stack is dropped,
+and the N=2 (<code>0x51</code>) to 17 (<code>0x60</code>) further elements are popped from the main stack,
+continuing from the alt stack if the main stack is exhausted,
+and concatenated together in reverse order to form a serialized script,
+which is then executed with the remaining elements of both stacks remaining in place as inputs.
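+
+As a rough sketch of this element-gathering step (the function name and list-based stack representation are illustrative assumptions, not the reference implementation, and "reverse order" is read as deepest-popped element first):
+
```python
def assemble_subscript(main_stack, alt_stack):
    """Pop the single-byte selector (0x51..0x60) and the N elements it
    indicates, returning the concatenated serialized script.  Stacks
    are Python lists with the top of the stack at the end."""
    top = main_stack.pop()[0]          # selector byte, 0x51..0x60
    n = top - 0x51 + 2                 # 0x51 (OP_1) -> N=2 ... 0x60 -> N=17
    pieces = []
    for _ in range(n):                 # pop from the main stack first,
        src = main_stack if main_stack else alt_stack
        pieces.append(src.pop())       # then continue from the alt stack
    # "reverse order": the deepest popped element leads the script
    return b''.join(reversed(pieces))

# OP_1 (0x51) selects N=2 elements; the deeper element comes first.
assert assemble_subscript([b'AAA', b'BB', b'\x51'], []) == b'AAABB'
# Main stack exhausted after one pop; the next element comes from alt stack.
assert assemble_subscript([b'CC', b'\x51'], [b'DD']) == b'DDCC'
```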
+
+The presence of CHECKSIG or CHECKMULTISIG within the subscript does not count towards the global MAX_BLOCK_SIGOPS_COST limit,
+and the number of non-push opcodes executed in the subscript is not limited by MAX_OPS_PER_SCRIPT.
+Execution state, other than the above exceptions, carries over into the subscript,
+and termination of the subscript terminates execution of the script as a whole.
+This is known as execution with tail-call semantics.
+
+Only one such tail-call of a subscript is allowed per script execution context, and only from within a segwit redeem script.
+Alternatively stated, neither evaluation of witness stack nor execution of the scriptPubKey or scriptSig or P2SH redeem script results in tail-call semantics.
+
+==Motivation==
+
+BIP16 (Pay to Script Hash)[1] and BIP141 (Segregated Witness)[2] allow delayed revelation of a script's policy until the time of spend.
+However these approaches are limited in that only a single policy can be committed to in a given transaction output.
+It is not possible to commit to multiple policies and then choose, at spend time, which to reveal.
+
+BIP116 (MERKLEBRANCHVERIFY)[3] allows multiple data elements to be committed to while only revealing those necessary at the time of spend.
+The MERKLEBRANCHVERIFY opcode is only able to provide commitments to a preselected set of data values, and does not by itself allow for executing code.
+
+This BIP generalizes the approach of these prior methods by allowing the redeem script to perform any type of computation necessary to place the policy script on the stack.
+The policy script is then executed from the top of the data stack in a way similar to how BIP16 and BIP141 enable redeem scripts to be executed from the top of the witness stack.
+In particular, using MERKLEBRANCHVERIFY[3] in the scriptPubKey or redeem script allows selection of the policy script that contains only the necessary conditions for validation of the spend.
+This is a form of generalized MAST[4] where a stage of precomputation splits a syntax tree into possible execution pathways, which are then enumerated and hashed into a Merkle tree of policy scripts.
+At spend time membership in this tree of the provided policy script is proven before execution recurses into the policy script.
+
+==Rationale==
+
+This proposal is a soft-fork change to bitcoin's consensus rules because leaving a script that data-wise evaluates as true from its serialized form on the stack as execution terminates would result in the script validation returning true anyway.
+Giving the subscript a chance to terminate execution is only further constraining the validation rules.
+The only scripts which would evaluate as false are the empty script, or a script that does nothing more than push empty/zero values to the stack.
+None of these scripts have any real-world utility, so excluding them to achieve soft-fork compatibility doesn't come with any downsides.
+
+By restricting ourselves to tail-call evaluation instead of a more general EVAL opcode we greatly simplify the implementation.
+Tail-call semantics means that execution never returns to the calling script's context, and therefore no state needs to be saved or later restored.
+The implementation is truly as simple as pulling the subscript off the stack, resetting a few state variables, and performing a jump back to the beginning of the script interpreter.
+
+The restriction to allow only one layer of tail-call recursion is admittedly limiting, however the technical challenges to supporting multi-layer tail-call recursion are significant.
+A new metric would have to be developed to track script resource usage, for which transaction data size and witness size are only two factors.
+This new weight would have to be relayed with transactions, used as the basis for fee calculation, validated in-line with transaction execution, and policy decided upon for DoS-banning peers that propagate violating transactions.
+
+However should these problems be overcome, dropping the single recursion constraint is itself a soft-fork for the same reason, applied inductively.
+Allowing only one layer of tail-call recursion allows us to receive the primary benefit of multi-policy commitments / generalized MAST,
+while leaving the door open to future generalized tail-call recursion if and when the necessary changes are made to resource accounting and p2p transaction distribution.
+
+The global SIGOP limit and per-script opcode limits do not apply to the policy script
+because dynamic selection of the policy script makes it impossible for static analysis tools to verify these limits in general,
+and because performance improvements to libsecp256k1 and Bitcoin Core have made these limits less necessary than they once were.
+The validation costs are still limited by the number of signature operations it is possible to encode within block size limits,
+and the maximum script size per input is limited to 10,000 + 17*520 = 18,840 bytes.
+
+To allow for this drop of global and per-script limits,
+tail-call evaluation cannot be allowed for direct execution of the scriptPubKey,
+as such scripts are fetched from the UTXO and do not count towards block size limits of the block being validated.
+Likewise tail-call from P2SH redeem scripts is not supported due to quadratic blow-up vulnerabilities that are fixed in segwit.
+
+==Generalized MAST==
+
+When combined with BIP116 (MERKLEBRANCHVERIFY)[3], tail-call semantics allows for generalized MAST capabilities[4].
+The script author starts with a full description of the entire contract they want to validate at the time of spend.
+The possible execution pathways through the script are then enumerated, with conditional branches replaced by a validation of the condition and the branch taken.
+The list of possible execution pathways is then put into a Merkle tree, with the flattened policy scripts as the leaves of this tree.
+The final redeem script which funds are sent to is as follows:
+
+ redeemScript: <nowiki><root> 2 MERKLEBRANCHVERIFY 2DROP DROP</nowiki>
+ witness: <nowiki><argN> ... <arg1> <policyScript> <proof></nowiki>
+
+Where <code>policyScript</code> is the flattened execution pathway, <code>proof</code> is the serialized Merkle branch and path that proves the policyScript is drawn from the set used to construct the Merkle tree <code>root</code>, and <code>arg1</code> through <code>argN</code> are the arguments required by <code>policyScript</code>.
+The <code>2</code> indicates that a single leaf (<code>1 << 1</code>) follows, and the leaf value is not pre-hashed.
+The <code>2DROP DROP</code> is necessary to remove the arguments to MERKLEBRANCHVERIFY from the stack.
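+
+This membership check can be sketched in Python. The policy scripts below are hypothetical, and the toy two-leaf hash tree is a plain SHA-256 pairing chosen for brevity; BIP116's actual fast Merkle tree differs in its serialization and hashing details, so this is illustrative only.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

# Hypothetical flattened policy scripts, one per execution pathway.
policy_a = b"<pubkey1> CHECKSIG"
policy_b = b"<delay> CHECKSEQUENCEVERIFY DROP <pubkey2> CHECKSIG"

# Toy two-leaf tree: root = H(H(leaf_a) || H(leaf_b)).
root = h(h(policy_a) + h(policy_b))

def verify_branch(leaf: bytes, sibling: bytes, leaf_is_left: bool,
                  expected_root: bytes) -> bool:
    """Check that `leaf` is committed to by `expected_root` given the
    sibling hash -- the role MERKLEBRANCHVERIFY plays in the script."""
    lh = h(leaf)
    pair = lh + sibling if leaf_is_left else sibling + lh
    return h(pair) == expected_root
```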
+
+The above example was designed for clarity, but actually violates the CLEANSTACK rule of segwit v0 script execution.
+Unless the CLEANSTACK rule is dropped or modified in a new segwit output version, this script would have to be modified to use the alt-stack, as follows:
+
+ redeemScript: <nowiki>[TOALTSTACK]*N <root> 2 MERKLEBRANCHVERIFY 2DROP DROP</nowiki>
+ witness: <nowiki><policyScript> <proof> <arg1> ... <argN></nowiki>
+
+Where <code>[TOALTSTACK]*N</code> is the TOALTSTACK opcode repeated N times.
+This moves <code>arg1</code> through <code>argN</code> to the alt-stack in reverse order, such that <code>arg1</code> is on the top of the alt-stack when execution of <code>policyScript</code> begins.
+The <code>policyScript</code> would also have to be modified to fetch its arguments from the alt-stack, of course.
+
+If the total set of policy scripts includes scripts that take a varying number of parameters, that too can be supported, within reasonable limits.
+The following redeem script allows between 1 and 3 witness arguments in addition to the policy script and Merkle proof:
+
+ witness: <nowiki><policyScript> <proof> <arg1> ... <argN></nowiki> // N is between 1 and 3
+ redeemScript: DEPTH TOALTSTACK // Save number of witness elements to alt-stack
+ TOALTSTACK // Save 1st element (required) to alt-stack
+ DEPTH 2 SUB // Calculate number of optional elements, ignoring policyScript and proof
+ DUP IF SWAP TOALTSTACK 1SUB ENDIF // Save 2nd element (optional) to alt-stack, if it is present
+ IF TOALTSTACK ENDIF // Save 3rd element (optional) to alt-stack, if it is present; consume counter
+ <nowiki><root></nowiki> 2 MERKLEBRANCHVERIFY 2DROP DROP
+ alt-stack: <nowiki><N+2> <argN> ... <arg1></nowiki>
+
+Because the number of witness elements is pushed onto the alt-stack, this enables policy scripts to verify the number of arguments passed, even though the size of the alt-stack is not usually accessible to script.
+The following policy script for use with the above redeem script will only accept 2 witness elements on the alt-stack, preventing witness malleability:
+
+ policyScript: <nowiki>FROMALTSTACK ...check arg1... FROMALTSTACK ...check&consume arg2/arg1&2... FROMALTSTACK 4 EQUAL</nowiki>
+
+The number 4 is expected, as the witness element count includes the <code>policyScript</code> and <code>proof</code>.
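+
+The argument-saving sequence above can be modeled with a toy Python simulation of the two stacks (lists with the top at the end). This is a sketch of the counting logic only, not a script interpreter, and the function name is hypothetical.

```python
def save_args_to_altstack(stack):
    """Model of: DEPTH TOALTSTACK  TOALTSTACK  DEPTH 2 SUB
                 DUP IF SWAP TOALTSTACK 1SUB ENDIF  IF TOALTSTACK ENDIF
    `stack` is bottom-to-top: [policyScript, proof, arg1, ..., argN],
    with 1 <= N <= 3.  Returns the remaining stack and the alt-stack."""
    alt = [len(stack)]             # DEPTH TOALTSTACK: save element count
    alt.append(stack.pop())        # TOALTSTACK: save required 1st argument
    n_opt = len(stack) - 2         # DEPTH 2 SUB: optional arguments left
    if n_opt:                      # DUP IF ... ENDIF
        alt.append(stack.pop())    # SWAP TOALTSTACK: save 2nd argument
        n_opt -= 1                 # 1SUB
    if n_opt:                      # IF ... ENDIF (consumes the counter)
        alt.append(stack.pop())    # TOALTSTACK: save 3rd argument
    return stack, alt
```

+With three arguments, <code>save_args_to_altstack(['policy', 'proof', 'a1', 'a2', 'a3'])</code> leaves <code>['policy', 'proof']</code> on the main stack and <code>[5, 'a3', 'a2', 'a1']</code> on the alt-stack, matching the alt-stack layout shown above.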
+
+The verbosity of this example can be avoided by using a uniform number of witness elements as parameters for all policy subscripts, eliminating the conditionals and stack size counts.
+Future script version upgrades should also consider relaxing CLEANSTACK rules to allow direct pass-through of arguments from the witness/redeem script to the policy script on the main stack.
+
+===Comparison with BIP114===
+
+BIP114 (Merkelized Abstract Syntax Tree)[5] specifies an explicit MAST scheme activated by BIP141 script versioning[2].
+Unlike BIP114, the scheme proposed by this BIP in conjunction with BIP116 (MERKLEBRANCHVERIFY)[3] implicitly enables MAST constructs using script itself to validate membership of the policy script in the MAST.
+This has the advantage of requiring vastly fewer consensus code changes, as well as potentially enabling future script-based innovation without requiring any further consensus code changes at all, as the MAST scheme itself is programmable.
+
+Furthermore, by adding MERKLEBRANCHVERIFY and tail-call semantics to all script using the NOP-expansion space, BIP141 style script versioning is not required.
+This removes a potentially significant hurdle to deployment by making this feature not dependent on resolving outstanding issues over address formats, how script version upgrades should be deployed, and consensus over what other features might go into a v1 upgrade.
+
+==Implementation==
+
+An implementation of this BIP, including both consensus code changes and tests, is available at the following Github repository:
+
+[https://github.com/maaku/bitcoin/tree/tail-call-semantics]
+
+==Deployment==
+
+This BIP will be deployed by BIP8 (Version bits with lock-in by height)[9] with the name "tailcall" and using bit 3.
+
+For Bitcoin mainnet, the BIP8 startheight will be at height M to be determined and BIP8 timeout activation will occur on height M + 50,400 blocks.
+
+For Bitcoin testnet, the BIP8 startheight will be at height T to be determined and BIP8 timeout activation will occur on height T + 50,400 blocks.
+
+We note that CLEANSTACK means that transactions which use this feature are already considered non-standard by the rules of the network, making deployment easier than was the case with, for example, BIP68 (Relative lock-time using consensus-enforced sequence numbers)[6].
+
+==Compatibility==
+
+The v0 segwit rules prohibit leaving anything on the stack, so for v0 parameters have to be passed on the alt stack for compatibility reasons.
+
+==References==
+
+[1] [https://github.com/bitcoin/bips/blob/master/bip-0016.mediawiki BIP16: Pay to Script Hash]
+
+[2] [https://github.com/bitcoin/bips/blob/master/bip-0141.mediawiki BIP141: Segregated Witness (Consensus Layer)]
+
+[3] [https://github.com/bitcoin/bips/blob/master/bip-0116.mediawiki BIP116: MERKLEBRANCHVERIFY]
+
+[4] "[https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/015028.html An explanation and justification of the tail-call and MBV approach to MAST]", Mark Friedenbach, Bitcoin Development Mailing List, 20 September 2017.
+
+[5] [https://github.com/bitcoin/bips/blob/master/bip-0114.mediawiki BIP114: Merkelized Abstract Syntax Tree]
+
+[6] [https://github.com/bitcoin/bips/blob/master/bip-0068.mediawiki BIP68: Relative lock-time using consensus-enforced sequence numbers]
+
+[9] [https://github.com/bitcoin/bips/blob/master/bip-0008.mediawiki BIP8: Version bits with lock-in by height]
diff --git a/bip-0120.mediawiki b/bip-0120.mediawiki
index d48cdfa..b951e93 100644
--- a/bip-0120.mediawiki
+++ b/bip-0120.mediawiki
@@ -5,7 +5,7 @@
Author: Kalle Rosenbaum <kalle@rosenbaum.se>
Comments-Summary: No comments yet.
Comments-URI: https://github.com/bitcoin/bips/wiki/Comments:BIP-0120
- Status: Draft
+ Status: Withdrawn
Type: Standards Track
Created: 2015-07-28
</pre>
diff --git a/bip-0121.mediawiki b/bip-0121.mediawiki
index 34820f5..1b01a0b 100644
--- a/bip-0121.mediawiki
+++ b/bip-0121.mediawiki
@@ -5,7 +5,7 @@
Author: Kalle Rosenbaum <kalle@rosenbaum.se>
Comments-Summary: No comments yet.
Comments-URI: https://github.com/bitcoin/bips/wiki/Comments:BIP-0121
- Status: Draft
+ Status: Withdrawn
Type: Standards Track
Created: 2015-07-27
</pre>
diff --git a/bip-0125.mediawiki b/bip-0125.mediawiki
index a4b0279..b2e3cec 100644
--- a/bip-0125.mediawiki
+++ b/bip-0125.mediawiki
@@ -51,11 +51,11 @@ transaction) that spends one or more of the same inputs if,
# The original transactions signal replaceability explicitly or through inheritance as described in the above Summary section.
-# The replacement transaction pays an absolute higher fee than the sum paid by the original transactions.
-
# The replacement transaction does not contain any new unconfirmed inputs that did not previously appear in the mempool. (Unconfirmed inputs are inputs spending outputs from currently unconfirmed transactions.)
-# The replacement transaction must pay for its own bandwidth in addition to the amount paid by the original transactions at or above the rate set by the node's minimum relay fee setting. For example, if the minimum relay fee is 1 satoshi/byte and the replacement transaction is 500 bytes total, then the replacement must pay a fee at least 500 satoshis higher than the sum of the originals.
+# The replacement transaction pays an absolute fee of at least the sum paid by the original transactions.
+
+# The replacement transaction must also pay for its own bandwidth at or above the rate set by the node's minimum relay fee setting. For example, if the minimum relay fee is 1 satoshi/byte and the replacement transaction is 500 bytes total, then the replacement must pay a fee at least 500 satoshis higher than the sum of the originals.
# The number of original transactions to be replaced and their descendant transactions which will be evicted from the mempool must not exceed a total of 100 transactions.
diff --git a/bip-0141.mediawiki b/bip-0141.mediawiki
index 21df5bb..adcf9a9 100644
--- a/bip-0141.mediawiki
+++ b/bip-0141.mediawiki
@@ -249,7 +249,7 @@ Segregated witness fixes the problem of transaction malleability fundamentally,
Two parties, Alice and Bob, may agree to send certain amount of Bitcoin to a 2-of-2 multisig output (the "funding transaction"). Without signing the funding transaction, they may create another transaction, time-locked in the future, spending the 2-of-2 multisig output to third account(s) (the "spending transaction"). Alice and Bob will sign the spending transaction and exchange the signatures. After examining the signatures, they will sign and commit the funding transaction to the blockchain. Without further action, the spending transaction will be confirmed after the lock-time and release the funding according to the original contract. It also retains the flexibility of revoking the original contract before the lock-time, by another spending transaction with shorter lock-time, but only with mutual-agreement of both parties.
-Such setups is not possible with BIP62 as the malleability fix, since the spending transaction could not be created without both parties first signing the funding transaction. If Alice reveals the funding transaction signature before Bob does, Bob is able to lock up the funding indefinitely without ever signing the spending transaction.
+Such setups are not possible with BIP62 as the malleability fix, since the spending transaction could not be created without both parties first signing the funding transaction. If Alice reveals the funding transaction signature before Bob does, Bob is able to lock up the funding indefinitely without ever signing the spending transaction.
Unconfirmed transaction dependency chain is a fundamental building block of more sophisticated payment networks, such as duplex micropayment channel and the Lightning Network, which have the potential to greatly improve the scalability and efficiency of the Bitcoin system.
diff --git a/bip-0152.mediawiki b/bip-0152.mediawiki
index 8ea3701..e6a3969 100644
--- a/bip-0152.mediawiki
+++ b/bip-0152.mediawiki
@@ -128,7 +128,7 @@ A new inv type (MSG_CMPCT_BLOCK == 4) and several new protocol messages are adde
# Upon receipt of a cmpctblock message after sending a sendcmpct message, nodes SHOULD calculate the short transaction ID for each unconfirmed transaction they have available (ie in their mempool) and compare each to each short transaction ID in the cmpctblock message.
# After finding already-available transactions, nodes which do not have all transactions available to reconstruct the full block SHOULD request the missing transactions using a getblocktxn message.
# A node MUST NOT send a cmpctblock message unless they are able to respond to a getblocktxn message which requests every transaction in the block.
-# A node MUST NOT send a cmpctblock message without having validated that the header properly commits to each transaction in the block, and properly builds on top of the existing chain with a valid proof-of-work. A node MAY send a cmpctblock before validating that each transaction in the block validly spends existing UTXO set entries.
+# A node MUST NOT send a cmpctblock message without having validated that the header properly commits to each transaction in the block, and properly builds on top of the existing, fully-validated chain with a valid proof-of-work either as a part of the current most-work valid chain, or building directly on top of it. A node MAY send a cmpctblock before validating that each transaction in the block validly spends existing UTXO set entries.
====getblocktxn====
# The getblocktxn message is defined as a message containing a serialized BlockTransactionsRequest message and pchCommand == "getblocktxn".
diff --git a/bip-0157.mediawiki b/bip-0157.mediawiki
new file mode 100644
index 0000000..61ffc7e
--- /dev/null
+++ b/bip-0157.mediawiki
@@ -0,0 +1,471 @@
+<pre>
+ BIP: 157
+ Layer: Peer Services
+ Title: Client Side Block Filtering
+ Author: Olaoluwa Osuntokun <laolu32@gmail.com>
+ Alex Akselrod <alex@akselrod.org>
+ Jim Posen <jimpo@coinbase.com>
+ Comments-Summary: None yet
+ Comments-URI: https://github.com/bitcoin/bips/wiki/Comments:BIP-0157
+ Status: Draft
+ Type: Standards Track
+ Created: 2017-05-24
+ License: CC0-1.0
+</pre>
+
+
+== Abstract ==
+
+This BIP describes a new light client protocol in Bitcoin that improves upon
+currently available options. The standard light client protocol in use today,
+defined in BIP
+37<ref>https://github.com/bitcoin/bips/blob/master/bip-0037.mediawiki</ref>, has
+known flaws that weaken the security and privacy of clients and allow
+denial-of-service attack vectors on full
+nodes<ref>https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-May/012636.html</ref>.
+The new protocol overcomes these issues by allowing light clients to obtain
+compact probabilistic filters of block content from full nodes and download full
+blocks if the filter matches relevant data.
+
+New P2P messages empower light clients to securely sync the blockchain without
+relying on a trusted source. This BIP also defines a filter header, which serves
+as a commitment to all filters for previous blocks and provides the ability to
+efficiently detect malicious or faulty peers serving invalid filters. The
+resulting protocol guarantees that light clients with at least one honest peer
+are able to identify the correct block filters.
+
+== Motivation ==
+
+Bitcoin light clients allow applications to read relevant transactions from the
+blockchain without incurring the full cost of downloading and validating all
+data. Such applications seek to simultaneously minimize the trust in peers and
+the amount of bandwidth, storage space, and computation required. They achieve
+this by downloading all block headers, verifying the proofs of work, and
+following the longest proof-of-work chain. Since block headers are a fixed
+80-bytes and are generated every 10 minutes on average, the bandwidth required
+to sync the block headers is minimal. Light clients then download only the
+blockchain data relevant to them directly from peers and validate inclusion in
+the header chain. Though clients do not check the validity of all blocks in the
+longest proof-of-work chain, they rely on miner incentives for security.
+
+BIP 37 is currently the most widely used light client execution mode for
+Bitcoin. With BIP 37, a client sends a Bloom filter it wants to watch to a full
+node peer, then receives notifications for each new transaction or block that
+matches the filter. The client then requests relevant transactions from the peer
+along with Merkle proofs of inclusion in the blocks containing them, which are
+verified against the block headers. The Bloom filters match data such as client
+addresses and unspent outputs, and the filter size must be carefully tuned to
+balance the false positive rate with the amount of information leaked to the peer. It
+has been shown, however, that most implementations available offer virtually
+''zero privacy'' to wallets and other
+applications<ref>https://eprint.iacr.org/2014/763.pdf</ref><ref>https://jonasnick.github.io/blog/2015/02/12/privacy-in-bitcoinj/</ref>.
+Additionally, malicious full nodes serving light clients can omit critical data
+with little risk of detection, which is unacceptable for some applications
+(such as Lightning Network clients) that must respond to certain on-chain
+events. Finally, honest nodes servicing BIP 37 light clients may incur
+significant I/O and CPU resource usage due to maliciously crafted Bloom filters,
+creating a denial-of-service (DoS) vector and disincentivizing node operators from
+supporting the
+protocol<ref>https://github.com/bitcoin/bips/blob/master/bip-0111.mediawiki</ref>.
+
+The alternative detailed in this document can be seen as the opposite of BIP 37:
+instead of the client sending a filter to a full node peer, full nodes generate
+deterministic filters on block data that are served to the client. A light
+client can then download an entire block if the filter matches the data it is
+watching for. Since filters are deterministic, each filter need only be constructed
+once, when a new block is connected to the chain, and then stored on disk. This
+keeps the computation required to serve filters minimal, and eliminates the I/O
+asymmetry that makes BIP 37 enabled nodes vulnerable. Clients also get better
+assurance of seeing all relevant transactions because they can check the
+validity of filters received from peers more easily than they can check
+completeness of filtered blocks. Finally, client privacy is improved because
+blocks can be downloaded from ''any source'', so that no one peer gets complete
+information on the data required by a client. Extremely privacy-conscious light
+clients may opt to anonymously fetch blocks using advanced techniques such as
+Private Information
+Retrieval<ref>https://en.wikipedia.org/wiki/Private_information_retrieval</ref>.
+
+== Definitions ==
+
+<code>[]byte</code> represents a vector of bytes.
+
+<code>[N]byte</code> represents a fixed-size byte array with length N.
+
+''CompactSize'' is a compact encoding of unsigned integers used in the Bitcoin
+P2P protocol.
+
+''double-SHA256'' is a hash algorithm defined by two invocations of SHA-256:
+<code>double-SHA256(x) = SHA256(SHA256(x))</code>.
+
+The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
+"SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
+interpreted as described in RFC 2119.
+
+== Specification ==
+
+=== Filter Types ===
+
+For the sake of future extensibility and reducing filter sizes, there are
+multiple ''filter types'' that determine which data is included in a block
+filter as well as the method of filter construction/querying. In this model,
+full nodes generate one filter per block per filter type supported.
+
+Each type is identified by a one byte code, and specifies the contents and
+serialization format of the filter. A full node MAY signal support for
+particular filter types using service bits. The initial filter types are defined
+separately in [[bip-0158.mediawiki|BIP 158]], and one service bit is allocated
+to signal support for them.
+
+=== Filter Headers ===
+
+This proposal draws inspiration from the headers-first mechanism that Bitcoin
+nodes use to sync the block
+chain<ref>https://bitcoin.org/en/developer-guide#headers-first</ref>. Similar to
+how block headers have a Merkle commitment to all transaction data in the block,
+we define filter headers that have commitments to the block filters. Also like
+block headers, filter headers each have a commitment to the preceding one.
+Before downloading the block filters themselves, a light client can download all
+filter headers for the current block chain and use them to verify the
+authenticity of the filters. If the filter header chains differ between multiple
+peers, the client can identify the point where they diverge, then download the
+full block and compute the correct filter, thus identifying which peer is
+faulty.
+
+The canonical hash of a block filter is the double-SHA256 of the serialized
+filter. Filter headers are 32-byte hashes derived for each block filter. They
+are computed as the double-SHA256 of the concatenation of the filter hash with
+the previous filter header. The previous filter header used to calculate that of
+the genesis block is defined to be the 32-byte array of 0's.
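+
+The derivation can be sketched in Python (helper names are hypothetical):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """double-SHA256(x) = SHA256(SHA256(x))."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def filter_header(filter_bytes: bytes, prev_header: bytes) -> bytes:
    """Filter header = double-SHA256(filter hash || previous filter header)."""
    filter_hash = double_sha256(filter_bytes)
    return double_sha256(filter_hash + prev_header)

# The previous header used for the genesis block's filter is 32 zero bytes.
GENESIS_PREV_HEADER = bytes(32)
```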
+
+=== New Messages ===
+
+==== getcfilters ====
+<code>getcfilters</code> is used to request the compact filters of a particular
+type for a particular range of blocks. The message contains the following
+fields:
+
+{| class="wikitable"
+! Field Name
+! Data Type
+! Byte Size
+! Description
+|-
+| FilterType
+| byte
+| 1
+| Filter type for which headers are requested
+|-
+| StartHeight
+| uint32
+| 4
+| The height of the first block in the requested range
+|-
+| StopHash
+| [32]byte
+| 32
+| The hash of the last block in the requested range
+|}
+
+# Nodes SHOULD NOT send <code>getcfilters</code> unless the peer has signaled support for this filter type. Nodes receiving <code>getcfilters</code> with an unsupported filter type SHOULD NOT respond.
+# StopHash MUST be known to belong to a block accepted by the receiving peer. This is the case if the peer had previously sent a <code>headers</code> or <code>inv</code> message with that block or any descendants. A node that receives <code>getcfilters</code> with an unknown StopHash SHOULD NOT respond.
+# The height of the block with hash StopHash MUST be greater than or equal to StartHeight, and the difference MUST be strictly less than 1,000.
+# The receiving node MUST respond to valid requests by sending one <code>cfilter</code> message for each block in the requested range, sequentially in order by block height.
+
+==== cfilter ====
+<code>cfilter</code> is sent in response to <code>getcfilters</code>, one for
+each block in the requested range. The message contains the following fields:
+
+{| class="wikitable"
+! Field Name
+! Data Type
+! Byte Size
+! Description
+|-
+| FilterType
+| byte
+| 1
+| Byte identifying the type of filter being returned
+|-
+| BlockHash
+| [32]byte
+| 32
+| Block hash of the Bitcoin block for which the filter is being returned
+|-
+| NumFilterBytes
+| CompactSize
+| 1-5
+| A variable length integer representing the size of the filter in the following field
+|-
+| FilterBytes
+| []byte
+| NumFilterBytes
+| The serialized compact filter for this block
+|}
+
+# The FilterType SHOULD match the field in the <code>getcfilters</code> request, and BlockHash MUST correspond to a block that is an ancestor of StopHash with height greater than or equal to StartHeight.
+
+==== getcfheaders ====
+<code>getcfheaders</code> is used to request verifiable filter headers for a
+range of blocks. The message contains the following fields:
+
+{| class="wikitable"
+! Field Name
+! Data Type
+! Byte Size
+! Description
+|-
+| FilterType
+| byte
+| 1
+| Filter type for which headers are requested
+|-
+| StartHeight
+| uint32
+| 4
+| The height of the first block in the requested range
+|-
+| StopHash
+| [32]byte
+| 32
+| The hash of the last block in the requested range
+|}
+
+# Nodes SHOULD NOT send <code>getcfheaders</code> unless the peer has signaled support for this filter type. Nodes receiving <code>getcfheaders</code> with an unsupported filter type SHOULD NOT respond.
+# StopHash MUST be known to belong to a block accepted by the receiving peer. This is the case if the peer had previously sent a <code>headers</code> or <code>inv</code> message with that block or any descendants. A node that receives <code>getcfheaders</code> with an unknown StopHash SHOULD NOT respond.
+# The height of the block with hash StopHash MUST be greater than or equal to StartHeight, and the difference MUST be strictly less than 2,000.
+
+==== cfheaders ====
+<code>cfheaders</code> is sent in response to <code>getcfheaders</code>. Instead
+of including the filter headers themselves, the response includes one filter
+header and a sequence of filter hashes, from which the headers can be derived.
+This has the benefit that the client can verify the binding links between the
+headers. The message contains the following fields:
+
+{| class="wikitable"
+! Field Name
+! Data Type
+! Byte Size
+! Description
+|-
+| FilterType
+| byte
+| 1
+| Filter type for which hashes are requested
+|-
+| StopHash
+| [32]byte
+| 32
+| The hash of the last block in the requested range
+|-
+| PreviousFilterHeader
+| [32]byte
+| 32
+| The filter header preceding the first block in the requested range
+|-
+| FilterHashesLength
+| CompactSize
+| 1-3
+| The length of the following vector of filter hashes
+|-
+| FilterHashes
+| [][32]byte
+| FilterHashesLength * 32
+| The filter hashes for each block in the requested range
+|}
+
+# The FilterType and StopHash SHOULD match the fields in the <code>getcfheaders</code> request.
+# FilterHashesLength MUST NOT be greater than 2,000.
+# FilterHashes MUST have one entry for each block on the chain terminating with tip StopHash, starting with the block at height StartHeight. The entries MUST be the filter hashes of the given type for each block in that range, in ascending order by height.
+# PreviousFilterHeader MUST be set to the previous filter header of the first block in the requested range.
+
+==== getcfcheckpt ====
+<code>getcfcheckpt</code> is used to request filter headers at evenly spaced
+intervals over a range of blocks. Clients may use filter hashes from
+<code>getcfheaders</code> to connect these checkpoints, as is described in the
+[[#client-operation|Client Operation]] section below. The
+<code>getcfcheckpt</code> message contains the following fields:
+
+{| class="wikitable"
+! Field Name
+! Data Type
+! Byte Size
+! Description
+|-
+| FilterType
+| byte
+| 1
+| Filter type for which headers are requested
+|-
+| StopHash
+| [32]byte
+| 32
+| The hash of the last block in the chain that headers are requested for
+|}
+
+# Nodes SHOULD NOT send <code>getcfcheckpt</code> unless the peer has signaled support for this filter type. Nodes receiving <code>getcfcheckpt</code> with an unsupported filter type SHOULD NOT respond.
+# StopHash MUST be known to belong to a block accepted by the receiving peer. This is the case if the peer had previously sent a <code>headers</code> or <code>inv</code> message with any descendant blocks. A node that receives <code>getcfcheckpt</code> with an unknown StopHash SHOULD NOT respond.
+
+==== cfcheckpt ====
+<code>cfcheckpt</code> is sent in response to <code>getcfcheckpt</code>. The
+filter headers included are the set of all filter headers on the requested chain
+where the height is a positive multiple of 1,000. The message contains the
+following fields:
+
+{| class="wikitable"
+! Field Name
+! Data Type
+! Byte Size
+! Description
+|-
+| FilterType
+| byte
+| 1
+| Filter type for which headers are requested
+|-
+| StopHash
+| [32]byte
+| 32
+| The hash of the last block in the chain that headers are requested for
+|-
+| FilterHeadersLength
+| CompactSize
+| 1-3
+| The length of the following vector of filter headers
+|-
+| FilterHeaders
+| [][32]byte
+| FilterHeadersLength * 32
+| The filter headers at intervals of 1,000
+|}
+
+# The FilterType and StopHash SHOULD match the fields in the <code>getcfcheckpt</code> request.
+# FilterHeaders MUST have exactly one entry for each block on the chain terminating in StopHash, where the block height is a multiple of 1,000 greater than 0. The entries MUST be the filter headers of the given type for each such block, in ascending order by height.
+
+=== Node Operation ===
+
+Full nodes MAY opt to support this BIP and generate filters for any of the
+specified filter types. Such nodes SHOULD treat the filters as an additional
+index of the blockchain. For each new block that is connected to the main chain,
+nodes SHOULD generate filters for all supported types and persist them. Nodes
+that are missing filters and are already synced with the blockchain SHOULD
+reindex the chain upon start-up, constructing filters for each block from
+genesis to the current tip. They also SHOULD keep every checkpoint header in
+memory, so that <code>getcfcheckpt</code> requests do not result in many
+random-access disk reads.
+
+Nodes SHOULD NOT generate filters dynamically on request, as malicious peers may
+be able to perform DoS attacks by requesting small filters derived from large
+blocks. This would require an asymmetric amount of I/O on the node to compute
+and serve, similar to attacks against BIP 37 enabled nodes noted in BIP 111.
+
+Nodes MAY prune block data after generating and storing all filters for a block.
+
+=== Client Operation ===
+
+This section provides recommendations for light clients to download filters with
+maximal security.
+
+Clients SHOULD first sync the entire block header chain from peers using the
+standard headers-first syncing mechanism before downloading any block filters or
+filter headers. Clients configured with trusted checkpoints MAY only sync
+headers starting from the last checkpoint. Clients SHOULD disconnect any outbound
+peers whose best chain has significantly less work than the known longest
+proof-of-work chain.
+
+Once a client's block headers are in sync, it SHOULD download and verify filter
+headers for all blocks and filter types that it might later download. The client
+SHOULD send <code>getcfheaders</code> messages to peers and derive and store the
+filter headers for each block. The client MAY first fetch headers at evenly
+spaced intervals of 1,000 by sending <code>getcfcheckpt</code>. The header
+checkpoints allow the client to download filter headers for different intervals
+from multiple peers in parallel, verifying each range of 1,000 headers against
+the checkpoints.
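+
+As a sketch of that verification step (helper names are hypothetical): given the PreviousFilterHeader and FilterHashes fields of a <code>cfheaders</code> response covering one interval, the client chains the hashes forward and compares the final derived header against the corresponding <code>cfcheckpt</code> entry.

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_interval(prev_filter_header: bytes, filter_hashes: list,
                    checkpoint: bytes) -> bool:
    """Derive each header as double-SHA256(filter_hash || prev_header) and
    check that the final derived header equals the known checkpoint."""
    header = prev_filter_header
    for filter_hash in filter_hashes:
        header = double_sha256(filter_hash + header)
    return header == checkpoint
```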
+
+Unless securely connected to a trusted peer that is serving filter headers, the
+client SHOULD connect to multiple outbound peers that support each filter type
+to mitigate the risk of downloading incorrect headers. If the client receives
+conflicting filter headers from different peers for any block and filter type,
+it SHOULD interrogate them to determine which is faulty. The client SHOULD use
+<code>getcfheaders</code> and/or <code>getcfcheckpt</code> to identify
+the first filter header on which the peers disagree. The client then SHOULD
+download the full block from any peer and derive the correct filter and filter
+header. The client SHOULD ban any peers that sent a filter header that does not
+match the computed one.
+
+Once the client has downloaded and verified all filter headers needed, ''and''
+no outbound peers have sent conflicting headers, the client can download the
+actual block filters it needs. The client MAY backfill filter headers before the
+first verified one at this point if it only downloaded them starting at a later
+point. Clients SHOULD persist the verified filter headers for the last 100 blocks in
+the chain (or whatever finality depth is desired), to compare against headers
+received from new peers after restart. They MAY store more filter headers to
+avoid redownloading them if a rescan is later necessary.
+
+Starting from the first block in the desired range, the client now MAY download
+the filters. The client SHOULD test that each filter links to its corresponding
+filter header and ban peers that send incorrect filters. The client MAY download
+multiple filters at once to increase throughput, though it SHOULD test the
+filters sequentially. The client MAY check if a filter is empty before
+requesting it by checking if the filter header commits to the hash of the empty
+filter, saving a round trip if that is the case.
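+
+The empty-filter shortcut can be sketched in Python. This is an illustration only: it assumes the filter-header derivation from BIP 157 (double-SHA256 of the filter hash concatenated with the previous filter header) and assumes the empty filter serializes to a single <code>0x00</code> byte (N = 0 as a CompactSize); the function names are hypothetical.

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def commits_to_empty_filter(filter_header: bytes, prev_header: bytes) -> bool:
    """True if this filter header commits to the empty filter, in which
    case the client can skip requesting the filter entirely."""
    empty_filter = b"\x00"  # assumed: N = 0 serialized as a CompactSize
    empty_filter_hash = dsha256(empty_filter)
    return filter_header == dsha256(empty_filter_hash + prev_header)

# Self-consistency check with an all-zero previous header:
prev = bytes(32)
header = dsha256(dsha256(b"\x00") + prev)
print(commits_to_empty_filter(header, prev))   # True
```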
+
+Each time a new valid block header is received, the client SHOULD request the
+corresponding filter headers from all eligible peers. If two peers send
+conflicting filter headers, the client should interrogate them as described
+above and ban any peers that send an invalid header.
+
+If a client is fetching full blocks from the P2P network, they SHOULD be downloaded
+from outbound peers at random to mitigate privacy loss due to transaction
+intersection analysis. Note that blocks may be downloaded from peers that do not
+support this BIP.
+
+== Rationale ==
+
+The filter headers and checkpoints messages are defined to help clients identify
+the correct filter for a block when connected to peers sending conflicting
+information. An alternative solution is to require Bitcoin blocks to include
+commitments to derived block filters, so light clients can verify authenticity
+given block headers and some additional witness data. This would require a
+network-wide change to the Bitcoin consensus rules, however, whereas this
+document proposes a solution purely at the P2P layer.
+
+The constant interval of 1,000 blocks between checkpoints was chosen so that,
+given the current chain height and rate of growth, the size of a
+<code>cfcheckpt</code> message does not differ drastically from that of a
+<code>cfheaders</code> message between two checkpoints. Also, 1,000 is a nice round
+number, at least to those of us who think in decimal.
+
+== Compatibility ==
+
+This light client mode is not compatible with current node deployments and
+requires support for the new P2P messages. The node implementation of this
+proposal is compatible with the current P2P network rules (ie. doesn't
+affect the network topology of full nodes). Light clients may adopt protocols based
+on this as an alternative to the existing BIP 37. Adoption of this BIP may
+result in reduced network support for BIP 37.
+
+== Acknowledgments ==
+
+We would like to thank bfd (from the bitcoin-dev mailing list) for bringing the
+basis of this BIP to our attention, Joseph Poon for suggesting the filter header
+chain scheme, and Pedro Martelletto for writing the initial indexing code for
+<code>btcd</code>.
+
+We would also like to thank Dave Collins, JJ Jeffrey, Eric Lombrozo, and Matt
+Corallo for useful discussions.
+
+== Reference Implementation ==
+
+Light client: [https://github.com/lightninglabs/neutrino]
+
+Full-node indexing: https://github.com/Roasbeef/btcd/tree/segwit-cbf
+
+Golomb-Rice Coded sets: https://github.com/Roasbeef/btcutil/tree/gcs/gcs
+
+== References ==
+
+<references/>
+
+== Copyright ==
+
+This document is licensed under the Creative Commons CC0 1.0 Universal license.
diff --git a/bip-0158.mediawiki b/bip-0158.mediawiki
new file mode 100644
index 0000000..dc28154
--- /dev/null
+++ b/bip-0158.mediawiki
@@ -0,0 +1,431 @@
+<pre>
+ BIP: 158
+ Layer: Peer Services
+ Title: Compact Block Filters for Light Clients
+ Author: Olaoluwa Osuntokun <laolu32@gmail.com>
+ Alex Akselrod <alex@akselrod.org>
+ Comments-Summary: None yet
+ Comments-URI: https://github.com/bitcoin/bips/wiki/Comments:BIP-0158
+ Status: Draft
+ Type: Standards Track
+ Created: 2017-05-24
+ License: CC0-1.0
+</pre>
+
+
+== Abstract ==
+
+This BIP describes a structure for compact filters on block data, for use in the
+BIP 157 light client protocol<ref>bip-0157.mediawiki</ref>. The filter
+construction proposed is an alternative to Bloom filters, as used in BIP 37,
+that minimizes filter size by using Golomb-Rice coding for compression. This
+document specifies two initial types of filters based on this construction that
+enable basic wallets and applications with more advanced smart contracts.
+
+== Motivation ==
+
+[[bip-0157.mediawiki|BIP 157]] defines a light client protocol based on
+deterministic filters of block content. The filters are designed to
+minimize the expected bandwidth consumed by light clients, downloading filters
+and full blocks. This document defines two initial filter types, ''basic'' and
+''extended'', to provide support for advanced applications while reducing the
+filter size for regular wallets.
+
+== Definitions ==
+
+<code>[]byte</code> represents a vector of bytes.
+
+<code>[N]byte</code> represents a fixed-size byte array with length N.
+
+''CompactSize'' is a compact encoding of unsigned integers used in the Bitcoin
+P2P protocol.
+
+''Data pushes'' are byte vectors pushed to the stack according to the rules of
+Bitcoin script.
+
+''Bit streams'' are readable and writable streams of individual bits. The
+following functions are used in the pseudocode in this document:
+* <code>new_bit_stream</code> instantiates a new writable bit stream
+* <code>new_bit_stream(vector)</code> instantiates a new bit stream reading data from <code>vector</code>
+* <code>write_bit(stream, b)</code> appends the bit <code>b</code> to the end of the stream
+* <code>read_bit(stream)</code> reads the next available bit from the stream
+* <code>write_bits_big_endian(stream, n, k)</code> appends the <code>k</code> least significant bits of integer <code>n</code> to the end of the stream in big-endian bit order
+* <code>read_bits_big_endian(stream, k)</code> reads the next available <code>k</code> bits from the stream and interprets them as the least significant bits of a big-endian integer
+
+The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
+"SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
+interpreted as described in RFC 2119.
+
+== Specification ==
+
+=== Golomb-Coded Sets ===
+
+For each block, compact filters are derived containing sets of items associated
+with the block (eg. addresses sent to, outpoints spent, etc.). A set of such
+data objects is compressed into a probabilistic structure called a
+''Golomb-coded set'' (GCS), which matches all items in the set with probability
+1, and matches other items with probability <code>2^(-P)</code> for some integer
+parameter <code>P</code>.
+
+At a high level, a GCS is constructed from a set of <code>N</code> items by:
+# hashing all items to 64-bit integers in the range <code>[0, N * 2^P)</code>
+# sorting the hashed values in ascending order
+# computing the differences between each value and the previous one
+# writing the differences sequentially, compressed with Golomb-Rice coding
+
+The following sections describe each step in greater detail.
+
+==== Hashing Data Objects ====
+
+The first step in the filter construction is hashing the variable-sized raw
+items in the set to the range <code>[0, F)</code>, where <code>F = N *
+2^P</code>. Set membership queries against the hash outputs will have a false
+positive rate of <code>2^(-P)</code>. To avoid integer overflow, the number of
+items <code>N</code> MUST be <2^32 and <code>P</code> MUST be <=32.
+
+The items are first passed through the pseudorandom function ''SipHash'', which
+takes a 128-bit key <code>k</code> and a variable-sized byte vector and produces
+a uniformly random 64-bit output. Implementations of this BIP MUST use the
+SipHash parameters <code>c = 2</code> and <code>d = 4</code>.
+
+The 64-bit SipHash outputs are then mapped uniformly over the desired range by
+multiplying with F and taking the top 64 bits of the 128-bit result. This
+algorithm is a faster alternative to modulo reduction, as it avoids the
+expensive division
+operation<ref>https://lemire.me/blog/2016/06/27/a-fast-alternative-to-the-modulo-reduction/</ref>.
+Note that care must be taken when implementing this reduction to ensure the
+upper 64 bits of the integer multiplication are not truncated; certain
+architectures and high level languages may require code that decomposes the
+64-bit multiplication into four 32-bit multiplications and recombines them into the
+result.
+
+<pre>
+hash_to_range(item: []byte, F: uint64, k: [16]byte) -> uint64:
+ return (siphash(k, item) * F) >> 64
+
+hashed_set_construct(raw_items: [][]byte, P: uint, k: [16]byte) -> []uint64:
+ let N = len(raw_items)
+ let F = N << P
+
+ let set_items = []
+
+ for item in raw_items:
+ let set_value = hash_to_range(item, F, k)
+ set_items.append(set_value)
+
+ return set_items
+</pre>
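+
+A minimal sketch of the multiply-and-shift reduction in Python, where arbitrary-precision integers make the 128-bit intermediate product explicit (languages with fixed 64-bit integers need the 32-bit decomposition noted above):

```python
def fast_reduce(x: int, F: int) -> int:
    """Map a 64-bit hash output x uniformly onto [0, F) by taking the
    top 64 bits of the 128-bit product x * F."""
    assert 0 <= x < 2**64
    return (x * F) >> 64

# The result always lies in [0, F) and scales linearly with x:
print(fast_reduce(0, 1000))           # 0
print(fast_reduce(2**63, 1000))       # 500  (the midpoint maps to ~F/2)
print(fast_reduce(2**64 - 1, 1000))   # 999
```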
+
+==== Golomb-Rice Coding ====
+
+Instead of writing the items in the hashed set directly to the filter, greater
+compression is achieved by only writing the differences between successive
+items in sorted order. Since the items are distributed uniformly, it can be
+shown that the differences resemble a geometric
+distribution<ref>https://en.wikipedia.org/wiki/Geometric_distribution</ref>.
+''Golomb-Rice''
+''coding''<ref>https://en.wikipedia.org/wiki/Golomb_coding#Rice_coding</ref>
+is a technique that optimally compresses geometrically distributed values.
+
+With Golomb-Rice, a value is split into a quotient and remainder modulo
+<code>2^P</code>, which are encoded separately. The quotient <code>q</code> is
+encoded as ''unary'', with a string of <code>q</code> 1's followed by one 0. The
+remainder <code>r</code> is represented in big-endian by P bits. For example,
+this is a table of Golomb-Rice coded values using <code>P=2</code>:
+
+{| class="wikitable"
+! n !! (q, r) !! c
+|-
+| 0 || (0, 0) || <code>0 00</code>
+|-
+| 1 || (0, 1) || <code>0 01</code>
+|-
+| 2 || (0, 2) || <code>0 10</code>
+|-
+| 3 || (0, 3) || <code>0 11</code>
+|-
+| 4 || (1, 0) || <code>10 00</code>
+|-
+| 5 || (1, 1) || <code>10 01</code>
+|-
+| 6 || (1, 2) || <code>10 10</code>
+|-
+| 7 || (1, 3) || <code>10 11</code>
+|-
+| 8 || (2, 0) || <code>110 00</code>
+|-
+| 9 || (2, 1) || <code>110 01</code>
+|}
+
+<pre>
+golomb_encode(stream, x: uint64, P: uint):
+ let q = x >> P
+
+ while q > 0:
+ write_bit(stream, 1)
+ q--
+ write_bit(stream, 0)
+
+ write_bits_big_endian(stream, x, P)
+
+golomb_decode(stream, P: uint) -> uint64:
+ let q = 0
+ while read_bit(stream) == 1:
+ q++
+
+ let r = read_bits_big_endian(stream, P)
+
+ let x = (q << P) + r
+ return x
+</pre>
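+
+The pseudocode above can be checked against the table with a small Python sketch, modeling the bit stream as a plain list of bits:

```python
def golomb_encode(bits: list, x: int, P: int) -> None:
    q = x >> P
    bits.extend([1] * q)                  # unary-coded quotient: q ones...
    bits.append(0)                        # ...terminated by a single zero
    for i in range(P - 1, -1, -1):
        bits.append((x >> i) & 1)         # remainder as P big-endian bits

def golomb_decode(bits: list, pos: int, P: int):
    q = 0
    while bits[pos] == 1:                 # count the unary quotient
        q += 1
        pos += 1
    pos += 1                              # skip the terminating zero
    r = 0
    for _ in range(P):                    # read P remainder bits, big-endian
        r = (r << 1) | bits[pos]
        pos += 1
    return (q << P) + r, pos

# n = 9 with P = 2 encodes to "110 01", matching the last table row:
bits = []
golomb_encode(bits, 9, 2)
print(bits)                               # [1, 1, 0, 0, 1]
print(golomb_decode(bits, 0, 2)[0])       # 9
```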
+
+==== Set Construction ====
+
+A GCS is constructed from three parameters:
+* <code>L</code>, a vector of <code>N</code> raw items
+* <code>P</code>, which determines the false positive rate
+* <code>k</code>, the 128-bit key used to randomize the SipHash outputs
+
+The result is a byte vector with a minimum size of <code>N * (P + 1)</code>
+bits.
+
+The raw items in <code>L</code> are first hashed to 64-bit unsigned integers as
+specified above and sorted. The differences between consecutive values,
+hereafter referred to as ''deltas'', are encoded sequentially to a bit stream
+with Golomb-Rice coding. Finally, the bit stream is padded with 0's to the
+nearest byte boundary and serialized to the output byte vector.
+
+<pre>
+construct_gcs(L: [][]byte, P: uint, k: [16]byte) -> []byte:
+ let set_items = hashed_set_construct(L, P, k)
+
+ set_items.sort()
+
+ let output_stream = new_bit_stream()
+
+ let last_value = 0
+ for item in set_items:
+ let delta = item - last_value
+ golomb_encode(output_stream, delta, P)
+ last_value = item
+
+ return output_stream.bytes()
+</pre>
+
+==== Set Querying/Decompression ====
+
+To check membership of an item in a compressed GCS, one must reconstruct the
+hashed set members from the encoded deltas. The procedure to do so is the
+reverse of the compression: deltas are decoded one by one and added to a
+cumulative sum. Each intermediate sum represents a hashed value in the original
+set. The queried item is hashed in the same way as the set members and compared
+against the reconstructed values. Note that querying does not require the entire
+decompressed set be held in memory at once.
+
+<pre>
+gcs_match(key: [16]byte, compressed_set: []byte, target: []byte, P: uint, N: uint) -> bool:
+ let F = N << P
+    let target_hash = hash_to_range(target, F, key)
+
+ stream = new_bit_stream(compressed_set)
+
+ let last_value = 0
+
+ loop N times:
+ let delta = golomb_decode(stream, P)
+ let set_item = last_value + delta
+
+ if set_item == target_hash:
+ return true
+
+ // Since the values in the set are sorted, terminate the search once
+ // the decoded value exceeds the target.
+ if set_item > target_hash:
+ break
+
+ last_value = set_item
+
+ return false
+</pre>
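+
+The construction and query procedures compose as follows in a self-contained Python sketch. For brevity it starts from already-hashed 64-bit set values rather than running SipHash and the range reduction, so only the delta coding and cumulative-sum decoding from the specification are exercised:

```python
def _golomb_encode(bits: list, x: int, P: int) -> None:
    q = x >> P
    bits.extend([1] * q + [0])                               # unary quotient
    bits.extend((x >> i) & 1 for i in range(P - 1, -1, -1))  # P remainder bits

def _golomb_decode(bits: list, pos: int, P: int):
    q = 0
    while bits[pos]:
        q += 1
        pos += 1
    pos += 1                                  # skip the terminating zero
    r = 0
    for _ in range(P):
        r = (r << 1) | bits[pos]
        pos += 1
    return (q << P) + r, pos

def construct_gcs(hashed_items: list, P: int) -> list:
    """Sort the hashed values and Golomb-Rice code the deltas."""
    bits, last = [], 0
    for v in sorted(hashed_items):
        _golomb_encode(bits, v - last, P)
        last = v
    return bits

def gcs_match(bits: list, target_hash: int, P: int, N: int) -> bool:
    """Rebuild set values by cumulative sum; stop once past the target."""
    last, pos = 0, 0
    for _ in range(N):
        delta, pos = _golomb_decode(bits, pos, P)
        last += delta
        if last == target_hash:
            return True
        if last > target_hash:
            break
    return False

items = [1337, 42, 9000]                      # stand-ins for SipHash outputs
compressed = construct_gcs(items, P=4)
print(gcs_match(compressed, 42, 4, len(items)))    # True
print(gcs_match(compressed, 43, 4, len(items)))    # False
```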
+
+Some applications may need to check for set intersection instead of membership
+of a single item. This can be performed far more efficiently than checking each
+item individually by leveraging the sorted structure of the compressed GCS.
+First the query elements are all hashed and sorted, then compared in order
+against the decompressed GCS contents. See
+[[#golomb-coded-set-multi-match|Appendix B]] for pseudocode.
+
+=== Block Filters ===
+
+This BIP defines two initial filter types:
+* Basic (<code>0x00</code>)
+* Extended (<code>0x01</code>)
+
+==== Contents ====
+
+The basic filter is designed to contain everything that a light client needs to
+sync a regular Bitcoin wallet. A basic filter MUST contain exactly the following
+items for each transaction in a block:
+* The outpoint of each input, except for the coinbase transaction
+* Each data push in the scriptPubKey of each output, ''only if'' the scriptPubKey is parseable
+* The <code>txid</code> of the transaction itself
+
+The extended filter contains extra data that is meant to enable applications
+with more advanced smart contracts. An extended filter MUST contain exactly the
+following items for each transaction in a block ''except the coinbase'':
+* Each item within the witness stack of each input (if the input has a witness)
+* Each data push in the scriptSig of each input
+
+Note that neither filter type interprets P2SH scripts or witness scripts to
+extract data pushes from them. If necessary, future filter types may be designed
+to do so.
+
+==== Construction ====
+
+Both the basic and extended filter types are constructed as Golomb-coded sets
+with the following parameters.
+
+The parameter <code>P</code> MUST be set to <code>20</code>. This value was
+chosen as simulations show that it minimizes the bandwidth utilized, considering
+both the expected number of blocks downloaded due to false positives and the
+size of the filters themselves. The code along with a demo used for the
+parameter tuning can be found
+[https://github.com/Roasbeef/bips/blob/83b83c78e189be898573e0bfe936dd0c9b99ecb9/gcs_light_client/gentestvectors.go here].
+
+The parameter <code>k</code> MUST be set to the first 16 bytes of the hash of
+the block for which the filter is constructed. This ensures the key is
+deterministic while still varying from block to block.
+
+Since the value <code>N</code> is required to decode a GCS, a serialized GCS
+includes it as a prefix, written as a CompactSize. Thus, the complete
+serialization of a filter is:
+* <code>N</code>, encoded as a CompactSize
+* The bytes of the compressed filter itself
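+
+A sketch of this serialization in Python, assuming the standard Bitcoin CompactSize encoding (a single byte below 0xfd, otherwise a marker byte followed by a little-endian integer); the filter bytes here are an arbitrary stand-in:

```python
def compact_size(n: int) -> bytes:
    """Encode an unsigned integer in Bitcoin's CompactSize format."""
    if n < 0xfd:
        return bytes([n])
    if n <= 0xffff:
        return b"\xfd" + n.to_bytes(2, "little")
    if n <= 0xffffffff:
        return b"\xfe" + n.to_bytes(4, "little")
    return b"\xff" + n.to_bytes(8, "little")

def serialize_filter(N: int, gcs_bytes: bytes) -> bytes:
    """A serialized filter: N as a CompactSize prefix, then the GCS bytes."""
    return compact_size(N) + gcs_bytes

# With N = 3 and two (arbitrary) compressed bytes:
print(serialize_filter(3, b"\xab\xcd").hex())   # '03abcd'
```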
+
+==== Signaling ====
+
+This BIP allocates a new service bit:
+
+{| class="wikitable"
+|-
+| NODE_COMPACT_FILTERS
+| style="white-space: nowrap;" | <code>1 << 6</code>
+| If enabled, the node MUST respond to all BIP 157 messages for filter types <code>0x00</code> and <code>0x01</code>
+|}
+
+== Compatibility ==
+
+This block filter construction is compatible with existing software,
+though it requires implementation of the new filters.
+
+== Acknowledgments ==
+
+We would like to thank bfd (from the bitcoin-dev mailing list) for bringing the
+basis of this BIP to our attention, Greg Maxwell for pointing us in the
+direction of Golomb-Rice coding and fast range optimization, and Pedro
+Martelletto for writing the initial indexing code for <code>btcd</code>.
+
+We would also like to thank Dave Collins, JJ Jeffrey, and Eric Lombrozo for
+useful discussions.
+
+== Reference Implementation ==
+
+Light client: [https://github.com/lightninglabs/neutrino]
+
+Full-node indexing: https://github.com/Roasbeef/btcd/tree/segwit-cbf
+
+Golomb-Rice Coded sets: https://github.com/Roasbeef/btcutil/tree/gcs/gcs
+
+== Appendix A: Alternatives ==
+
+A number of alternative set encodings were considered before Golomb-coded
+sets were settled upon. In this appendix section, we'll list a few of the
+alternatives along with our rationale for not pursuing them.
+
+==== Bloom Filters ====
+
+Bloom Filters are perhaps the best known probabilistic data structure for
+testing set membership, and were introduced into the Bitcoin protocol with BIP
+37. The size of a Bloom filter is larger than the expected size of a GCS with
+the same false positive rate, which is the main reason the option was rejected.
+
+==== Cryptographic Accumulators ====
+
+Cryptographic
+accumulators<ref>https://en.wikipedia.org/wiki/Accumulator_(cryptography)</ref>
+are cryptographic data structures that enable (amongst other operations) a
+one-way membership test. One advantage of accumulators is that they are constant
+size, independent of the number of elements inserted into the accumulator.
+However, current constructions of cryptographic accumulators require an initial
+trusted setup. Additionally, accumulators based on the Strong-RSA Assumption
+require mapping set items to prime representatives in the associated group, which
+can be prohibitively expensive.
+
+==== Matrix Based Probabilistic Set Data Structures ====
+
+There exist data structures based on matrix solving which are even more space
+efficient compared to Bloom
+filters<ref>https://arxiv.org/pdf/0804.1845.pdf</ref>. We instead opted for our
+GCS-based filters as they have a much lower implementation complexity and are
+easier to understand.
+
+== Appendix B: Pseudocode ==
+
+=== Golomb-Coded Set Multi-Match ===
+
+<pre>
+gcs_match_any(key: [16]byte, compressed_set: []byte, targets: [][]byte, P: uint, N: uint) -> bool:
+ let F = N << P
+
+ // Map targets to the same range as the set hashes.
+ let target_hashes = []
+ for target in targets:
+        let target_hash = hash_to_range(target, F, key)
+ target_hashes.append(target_hash)
+
+ // Sort targets so matching can be checked in linear time.
+ target_hashes.sort()
+
+ stream = new_bit_stream(compressed_set)
+
+ let value = 0
+ let target_idx = 0
+ let target_val = target_hashes[target_idx]
+
+ loop N times:
+ let delta = golomb_decode(stream, P)
+ value += delta
+
+ inner loop:
+ if target_val == value:
+ return true
+
+ // Move on to the next set value.
+ else if target_val > value:
+ break inner loop
+
+ // Move on to the next target value.
+ else if target_val < value:
+ target_idx++
+
+ // If there are no targets left, then there are no matches.
+ if target_idx == len(targets):
+ break outer loop
+
+ target_val = target_hashes[target_idx]
+
+ return false
+</pre>
+
+== Appendix C: Test Vectors ==
+
+TODO: To be generated.
+
+== References ==
+
+<references/>
+
+== Copyright ==
+
+This document is licensed under the Creative Commons CC0 1.0 Universal license.
diff --git a/bip-0159.mediawiki b/bip-0159.mediawiki
index 2253922..a2c71e1 100644
--- a/bip-0159.mediawiki
+++ b/bip-0159.mediawiki
@@ -1,7 +1,7 @@
<pre>
BIP: 159
Layer: Peer Services
- Title: NODE_NETWORK_LIMITED service bits
+ Title: NODE_NETWORK_LIMITED service bit
Author: Jonas Schnelli <dev@jonasschnelli.ch>
Comments-Summary: No comments yet.
Comments-URI: https://github.com/bitcoin/bips/wiki/Comments:BIP-0159
@@ -13,7 +13,7 @@
== Abstract ==
-Define service bits that allow pruned peers to signal their limited services
+Define a service bit that allows pruned peers to signal their limited services
==Motivation==
@@ -21,36 +21,34 @@ Pruned peers can offer the same services as traditional peer except of serving a
Bitcoin right now only offers the NODE_NETWORK service bit which indicates that a peer can serve
all historical blocks.
# Pruned peers can relay blocks, headers, transactions, addresses and can serve a limited number of historical blocks, thus they should have a way how to announce their service(s)
-# Peers no longer in initial block download should consider connection some of its outbound connections to pruned peers to allow other peers to bootstrap from non-pruned peers
+# Peers no longer in initial block download should consider connecting some of their outbound connections to pruned peers to allow other peers to bootstrap from non-pruned peers
== Specification ==
-=== New service bits ===
+=== New service bit ===
-This BIP proposes two new service bits
+This BIP proposes a new service bit
{|class="wikitable"
|-
-| NODE_NETWORK_LIMITED_LOW || bit 10 (0x400) || If signaled, the peer <I>MUST</I> be capable of serving at least the last 288 blocks (~2 day / the current minimum limit for Bitcoin Core).
-|-
-| NODE_NETWORK_LIMITED_HIGH || bit 11 (0x800) || If signaled, the peer <I>MUST</i> be capable of serving at least the last 1152 blocks (~8 days)
+| NODE_NETWORK_LIMITED || bit 10 (0x400) || If signaled, the peer <I>MUST</I> be capable of serving at least the last 288 blocks (~2 days).
|}
-A safety buffer of additional 144 blocks to handle chain reorganizations <I>SHOULD</I> be taken into account when connecting to a peer signaling <code>NODE_NETWORK_LIMITED_*</code> service bits.
+A safety buffer of 144 blocks to handle chain reorganizations <I>SHOULD</I> be taken into account when connecting to a peer signaling the <code>NODE_NETWORK_LIMITED</code> service bit.
=== Address relay ===
-Full nodes following this BIP <I>SHOULD</I> relay address/services (<code>addr</code> message) from peers they would connect to (including peers signaling <code>NODE_NETWORK_LIMITED_*</code>).
+Full nodes following this BIP <I>SHOULD</I> relay address/services (<code>addr</code> message) from peers they would connect to (including peers signaling <code>NODE_NETWORK_LIMITED</code>).
=== Counter-measures for peer fingerprinting ===
-Peers may have different prune depths (depending on the peers configuration, disk space, etc.) which can result in a fingerprinting weakness (finding the prune depth through getdata requests). NODE_NETWORK_LIMITED supporting peers <I>SHOULD</I> avoid leaking the prune depth and therefore not serve blocks deeper then the signaled <code>NODE_NETWORK_LIMITED_*</code> thresholds.
+Peers may have different prune depths (depending on the peer's configuration, disk space, etc.) which can result in a fingerprinting weakness (finding the prune depth through getdata requests). NODE_NETWORK_LIMITED supporting peers <I>SHOULD</I> avoid leaking the prune depth and therefore not serve blocks deeper than the signaled <code>NODE_NETWORK_LIMITED</code> threshold (288 blocks).
=== Risks ===
Pruned peers following this BIP may consume more outbound bandwidth.
-Light clients (and such) who are not checking the <code>nServiceFlags</code> (service bits) from a relayed <code>addr</code>-message may unwillingly connect to a pruned peer and ask for (filtered) blocks at a depth below their pruned depth. Light clients should therefore check the service bits (and eventually connect to peers signaling <code>NODE_NETWORK_LIMITED_*</code> if they require [filtered] blocks around the tip). Light clients obtaining peer IPs though DNS seed should use the DNS filtering option.
+Light clients (and such) who are not checking the <code>nServiceFlags</code> (service bits) from a relayed <code>addr</code>-message may unwillingly connect to a pruned peer and ask for (filtered) blocks at a depth below their pruned depth. Light clients should therefore check the service bits (and eventually connect to peers signaling <code>NODE_NETWORK_LIMITED</code> if they require [filtered] blocks around the tip). Light clients obtaining peer IPs through DNS seeds should use the DNS filtering option.
== Compatibility ==
@@ -58,10 +56,8 @@ This proposal is backward compatible.
== Reference implementation ==
-* https://github.com/bitcoin/bitcoin/pull/10387
-
-
-== References ==
+* https://github.com/bitcoin/bitcoin/pull/11740 (signaling)
+* https://github.com/bitcoin/bitcoin/pull/10387 (connection and relay)
== Copyright ==
diff --git a/bip-0173.mediawiki b/bip-0173.mediawiki
index 425b8d7..c2178b5 100644
--- a/bip-0173.mediawiki
+++ b/bip-0173.mediawiki
@@ -6,9 +6,9 @@
Greg Maxwell <greg@xiph.org>
Comments-Summary: No comments yet.
Comments-URI: https://github.com/bitcoin/bips/wiki/Comments:BIP-0173
- Status: Draft
+ Status: Proposed
Type: Informational
- Created: 2016-03-20
+ Created: 2017-03-20
License: BSD-2-Clause
Replaces: 142
</pre>
@@ -76,7 +76,7 @@ increase, but that does not matter when copy-pasting addresses.</ref> format cal
A Bech32<ref>'''Why call it Bech32?''' "Bech" contains the characters BCH (the error
detection algorithm used) and sounds a bit like "base".</ref> string is at most 90 characters long and consists of:
-* The '''human-readable part''', which is intended to convey the type of data or anything else that is relevant for the reader. Its validity (including the used set of characters) is application specific, but restricted to ASCII characters with values in the range 33-126.
+* The '''human-readable part''', which is intended to convey the type of data, or anything else that is relevant to the reader. This part MUST contain 1 to 83 US-ASCII characters, with each character having a value in the range [33-126]. HRP validity may be further restricted by specific applications.
* The '''separator''', which is always "1". In case "1" is allowed inside the human-readable part, the last one in the string is the separator<ref>'''Why include a separator in addresses?''' That way the human-readable
part is unambiguously separated from the data part, avoiding potential
collisions with other human-readable parts that share a prefix. It also
@@ -153,7 +153,7 @@ guarantees detection of '''any error affecting at most 4 characters'''
and has less than a 1 in 10<sup>9</sup> chance of failing to detect more
errors. More details about the properties can be found in the
Checksum Design appendix. The human-readable part is processed by first
-feeding the higher bits of each character's ASCII value into the
+feeding the higher bits of each character's US-ASCII value into the
checksum calculation followed by a zero and then the lower bits of each<ref>'''Why are the high bits of the human-readable part processed first?'''
This results in the actually checksummed data being ''[high hrp] 0 [low hrp] [data]''. This means that under the assumption that errors to the
human readable part only change the low 5 bits (like changing an alphabetical character into another), errors are restricted to the ''[low hrp] [data]''
@@ -182,11 +182,15 @@ to make.
'''Uppercase/lowercase'''
-Decoders MUST accept both uppercase and lowercase strings, but
-not mixed case. The lowercase form is used when determining a character's
-value for checksum purposes. For presentation, lowercase is usually
-preferable, but inside QR codes uppercase SHOULD be used, as those permit
-the use of
+The lowercase form is used when determining a character's value for checksum purposes.
+
+Encoders MUST always output an all lowercase Bech32 string.
+If an uppercase version of the encoding result is desired, (e.g.- for presentation purposes, or QR code use),
+then an uppercasing procedure can be performed external to the encoding process.
+
+Decoders MUST NOT accept strings where some characters are uppercase and some are lowercase (such strings are referred to as mixed case strings).
+
+For presentation, lowercase is usually preferable, but inside QR codes uppercase SHOULD be used, as those permit the use of
''[http://www.thonky.com/qr-code-tutorial/alphanumeric-mode-encoding alphanumeric mode]'', which is 45% more compact than the normal
''[http://www.thonky.com/qr-code-tutorial/byte-mode-encoding byte mode]''.
@@ -262,22 +266,33 @@ P2PKH addresses can be used.
===Test vectors===
-The following strings have a valid Bech32 checksum.
+The following strings are valid Bech32:
* <tt>A12UEL5L</tt>
+* <tt>a12uel5l</tt>
* <tt>an83characterlonghumanreadablepartthatcontainsthenumber1andtheexcludedcharactersbio1tt5tgs</tt>
* <tt>abcdef1qpzry9x8gf2tvdw0s3jn54khce6mua7lmqqqxw</tt>
* <tt>11qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqc8247j</tt>
* <tt>split1checkupstagehandshakeupstreamerranterredcaperred2y9e3w</tt>
+* <tt>?1ezyfcl</tt> WARNING: During conversion to US-ASCII some encoders may set unmappable characters to a valid US-ASCII character, such as '?'. For example:
+
+<pre>
+>>> bech32_encode('\x80'.encode('ascii', 'replace').decode('ascii'), [])
+'?1ezyfcl'
+</pre>
-The following strings have an invalid Bech32 checksum (with reason for invalidity):
+The following strings are not valid Bech32 (with reason for invalidity):
* 0x20 + <tt>1nwldj5</tt>: HRP character out of range
* 0x7F + <tt>1axkwrx</tt>: HRP character out of range
+* 0x80 + <tt>1eym55h</tt>: HRP character out of range
* <tt>an84characterslonghumanreadablepartthatcontainsthenumber1andtheexcludedcharactersbio1569pvx</tt>: overall max length exceeded
* <tt>pzry9x0s0muk</tt>: No separator character
* <tt>1pzry9x0s0muk</tt>: Empty HRP
* <tt>x1b4n0q5v</tt>: Invalid data character
* <tt>li1dgmt3</tt>: Too short checksum
* <tt>de1lg7wt</tt> + 0xFF: Invalid character in checksum
+* <tt>A1G7SGD8</tt>: checksum calculated with uppercase form of HRP
+* <tt>10a06t8</tt>: empty HRP
+* <tt>1qzzfhee</tt>: empty HRP
The following list gives valid segwit addresses and the scriptPubKey that they
translate to in hex.
diff --git a/bip-0174.mediawiki b/bip-0174.mediawiki
new file mode 100644
index 0000000..e02faff
--- /dev/null
+++ b/bip-0174.mediawiki
@@ -0,0 +1,528 @@
+<pre>
+ BIP: 174
+ Layer: Applications
+ Title: Partially Signed Bitcoin Transaction Format
+ Author: Andrew Chow <achow101@gmail.com>
+ Comments-Summary: No comments yet.
+ Comments-URI: https://github.com/bitcoin/bips/wiki/Comments:BIP-0174
+ Status: Draft
+ Type: Standards Track
+ Created: 2017-07-12
+ License: BSD-2-Clause
+</pre>
+
+==Introduction==
+
+===Abstract===
+
+This document proposes a binary transaction format which contains the information
+necessary for a signer to produce signatures for the transaction and holds the
+signatures for an input while the input does not have a complete set of signatures.
+The signer can be offline as all necessary information will be provided in the
+transaction.
+
+===Copyright===
+
+This BIP is licensed under the 2-clause BSD license.
+
+===Motivation===
+
+Creating unsigned or partially signed transactions to be passed around to multiple
+signers is currently implementation dependent, making it hard for people who use
+different wallet software to do so easily. One of the goals of this
+document is to create a standard and extensible format that can be used between clients to allow
+people to pass around the same transaction to sign and combine their signatures. The
+format is also designed to be easily extended for future use which is harder to do
+with existing transaction formats.
+
+Signing transactions also requires users to have access to the UTXOs being spent. This transaction
+format will allow offline signers such as air-gapped wallets and hardware wallets
+to be able to sign transactions without needing direct access to the UTXO set and without
+risk of being defrauded.
+
+==Specification==
+
+The Partially Signed Bitcoin Transaction (PSBT) format consists of key-value maps.
+Each key-value pair must be unique within its scope; duplicates are not allowed.
+Each map consists of a sequence of records, terminated by a <tt>0x00</tt> byte <ref>'''Why
+is the separator here <tt>0x00</tt> instead of <tt>0xff</tt>?'''
+The separator here is used to distinguish between each chunk of data. A separator of 0x00 would mean that
+the unserializer can read it as a key length of 0, which would never occur with actual keys. It can thus
+be used as a separator and allow for easier unserializer implementation.</ref>. The format
+of a record is as follows:
+
+Note: <tt><..></tt> indicates that the data is prefixed by a compact size unsigned integer representing
+the length of that data. <tt>{..}</tt> indicates the raw data itself.
+
+<pre>
+<key>|<value>
+</pre>
+
+{| class="wikitable" style="width: auto; text-align: center; font-size: smaller; table-layout: fixed;"
+!Name
+!Type
+!Description
+|-
+| Key Length
+| Compact Size Unsigned Integer
+| Specifies how long the key is
+|-
+| Key
+| byte[]
+| The key itself with the first byte being the type of the key-value pair
+|-
+| Value Length
+| Compact Size Unsigned Integer
+| Specifies how long the value is
+|-
+| Value
+| byte[]
+| The value itself
+|}
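The record layout above can be illustrated with a few lines of Python. This is a hedged sketch, not the reference implementation; the function names are hypothetical. Compact size unsigned integers follow the standard Bitcoin encoding.

```python
def read_compact_size(data, pos):
    """Read a Bitcoin compact size unsigned integer at data[pos]."""
    first = data[pos]
    if first < 0xfd:
        return first, pos + 1
    if first == 0xfd:
        return int.from_bytes(data[pos + 1:pos + 3], 'little'), pos + 3
    if first == 0xfe:
        return int.from_bytes(data[pos + 1:pos + 5], 'little'), pos + 5
    return int.from_bytes(data[pos + 1:pos + 9], 'little'), pos + 9

def read_record(data, pos):
    """Return (key, value, new_pos), or (None, None, new_pos) at a 0x00 separator."""
    key_len, pos = read_compact_size(data, pos)
    if key_len == 0:                      # a zero key length is the map separator
        return None, None, pos
    key = data[pos:pos + key_len]
    pos += key_len
    value_len, pos = read_compact_size(data, pos)
    value = data[pos:pos + value_len]
    pos += value_len
    return key, value, pos
```

Because a real key is never empty, a leading length byte of <tt>0x00</tt> can only be the separator, which is why the loop above needs no lookahead.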
+
+The format of each key-value map is as follows:
+
+<pre>
+{key-value pair}|{key-value pair}|...|{0x00}
+</pre>
+
+{| class="wikitable" style="width: auto; text-align: center; font-size: smaller; table-layout: fixed;"
+!Field Size
+!Name
+!Type
+!Value
+!Description
+|-
+| 1+
+| Key-value pairs
+| Array of key-value pairs
+| varies
+| The key-value pairs.
+|-
+| 1
+| separator
+| char
+| <tt>0x00</tt>
+| Must be <tt>0x00</tt>.
+|}
+
+The first byte of each key specifies the type of the key-value pair. Some types are
+global fields and others are defined per input. The only required type in a
+PSBT is the transaction type, as defined below. The currently defined global types are as follows:
+
+{| class="wikitable" style="width: auto; text-align: center; font-size: smaller;
+table-layout: fixed;"
+!Number
+!Name
+!Key Data
+!Value Data
+!Format Example
+|-
+| <tt>0x00</tt>
+| Transaction
+| None. The key must only contain the 1 byte type.
+| The transaction in network serialization. The scriptSigs and
+witnesses for each input must be empty unless the input is complete. The transaction
+must be in the witness serialization format as defined in BIP 144. A PSBT must have
+a transaction, otherwise it is invalid.
+| Key:
+<pre>
+{0x00}
+</pre>
+Value:
+<pre>
+{transaction}
+</pre>
+|-
+| <tt>0x01</tt>
+| Redeem Script<ref>'''Why are redeem scripts and witness scripts global?''' Redeem
+ scripts and witness scripts are global data to avoid duplication. Instead of specifying
+ a redeem script and witness script multiple times in inputs that need those scripts,
+ they are specified once in the global data.</ref>
+| The hash160 of the redeem script
+| A redeem script that will be needed to sign a Pay-To-Script-Hash input or is spent
+to by an output.<ref>'''Why are outputs' redeem scripts and witness scripts included?'''
+Redeem scripts and witness scripts spent to by an output in this transaction are included
+so that the user can verify that the transaction they are signing is creating the correct
+outputs that have the correct redeem and witness scripts. This is best used when the
+PSBT creator is not trusted by the signers.</ref>
+| Key:
+<pre>
+{0x01}|{hash160}
+</pre>
+Value:
+<pre>
+{redeem script}
+</pre>
+|-
+| <tt>0x02</tt>
+| Witness Script
+| The sha256 hash of the witness script
+| A witness script that will be needed to sign a Pay-To-Witness-Script-Hash input or is spent
+to by an output.
+| Key:
+<pre>
+{0x02}|{sha256}
+</pre>
+Value:
+<pre>
+{witness script}
+</pre>
+|-
+| <tt>0x03</tt>
+| BIP 32 Derivation path, public key, and Master Key fingerprint
+| The public key
+| The master key fingerprint concatenated with the derivation path of the public key. The
+derivation path is represented as 32-bit unsigned integer indexes concatenated
+with each other. This must omit the index of the master key.
+| Key:
+<pre>
+{0x03}|{public key}
+</pre>
+Value:
+<pre>
+{master key fingerprint}|{32-bit int}|...|{32-bit int}
+</pre>
+|-
+| <tt>0x04</tt>
+| Number of inputs provided in the PSBT
+| None. The key must only contain the 1 byte type.
+| A compact size unsigned integer representing the number of inputs that this PSBT has
+| Key:
+<pre>
+{0x04}
+</pre>
+Value:
+<pre>
+{number of inputs}
+</pre>
+|}
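The value layout of the type <tt>0x03</tt> record can be decoded as in the sketch below. The function name is hypothetical, and little-endian index serialization is assumed (the usual Bitcoin convention); the text above does not state a byte order explicitly.

```python
def parse_bip32_value(value):
    """Split a type 0x03 value into (fingerprint, derivation path)."""
    fingerprint = value[:4]                     # 4-byte master key fingerprint
    path = [int.from_bytes(value[i:i + 4], 'little')
            for i in range(4, len(value), 4)]   # 32-bit derivation indexes
    return fingerprint, path
```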
+
+The currently defined per-input types are defined as follows:
+
+{| class="wikitable" style="width: auto; text-align: center; font-size: smaller;
+table-layout: fixed;"
+!Number
+!Name
+!Key Data
+!Value Data
+!Format Example
+|-
+| <tt>0x00</tt>
+| Non-Witness UTXO
+| None. The key must only contain the 1 byte type.
+| The transaction in network serialization format the current input spends from.
+| Key:
+<pre>
+{0x00}
+</pre>
+Value:
+<pre>
+{transaction}
+</pre>
+|-
+| <tt>0x01</tt>
+| Witness UTXO
+| None. The key must only contain the 1 byte type.
+| The entire transaction output in network serialization which the current input spends from.
+| Key:
+<pre>
+{0x01}
+</pre>
+Value:
+<pre>
+{serialized transaction output({output value}|<scriptPubKey>)}
+</pre>
+|-
+| <tt>0x02</tt>
+| Partial Signature
+| The public key which corresponds to this signature.
+| The signature as would be pushed to the stack from a scriptSig or witness.
+| Key:
+<pre>
+{0x02}|{public key}
+</pre>
+Value:
+<pre>
+{signature}
+</pre>
+|-
+| <tt>0x03</tt>
+| Sighash Type
+| None. The key must only contain the 1 byte type.
+| The 32-bit unsigned integer recommending a sighash type to be used for this input.
+The sighash type is only a recommendation and the signer does not need to use
+the sighash type specified.
+| Key:
+<pre>
+{0x03}
+</pre>
+Value:
+<pre>
+{sighash type}
+</pre>
+|-
+| <tt>0x04</tt>
+| Input index
+| None. The key must only contain the 1 byte type.
+| A compact size unsigned integer representing the 0-based index of this input. This field
+is optional to allow for completed inputs to be skipped without needing a separator byte.
+If one input has this type, then all inputs must have it.
+| Key:
+<pre>
+{0x04}
+</pre>
+Value:
+<pre>
+{input index}
+</pre>
+|}
+
+The transaction format is specified as follows:
+
+
+<pre>
+ {0x70736274}|{0xff}|{global key-value map}|{input key-value map}|...|{input key-value map}
+</pre>
+
+{| class="wikitable" style="width: auto; text-align: center; font-size: smaller; table-layout: fixed;"
+!Field Size
+!Name
+!Type
+!Value
+!Description
+|-
+| 4
+| Magic Bytes
+| int32_t
+| <tt>0x70736274</tt>
+| Magic bytes which are ASCII for psbt. <ref>'''Why use 4 bytes for psbt?''' The
+transaction format needed to start with a 5 byte header which uniquely identifies
+it. The first bytes were chosen to be the ASCII for psbt because that stands for
+Partially Signed Bitcoin Transaction. </ref> This integer should be serialized
+in most significant byte order.
+|-
+| 1
+| separator
+| char
+| <tt>0xff</tt>
+| Must be <tt>0xff</tt> <ref>'''Why use a separator after the magic bytes?''' The separator
+is part of the 5 byte header for PSBT. This byte is a separator of <tt>0xff</tt> because
+this will cause any non-PSBT unserializer to fail to properly unserialize the PSBT
+as a normal transaction. Likewise, since the 5 byte header is fixed, no transaction
+in the non-PSBT format will be able to be unserialized by a PSBT unserializer.</ref>
+|-
+| 1+
+| Global data
+| Key-value Map
+| varies
+| The key-value pairs for all global data.
+|-
+| 1+
+| Inputs
+| Array of key-value maps
+| varies
+| The key-value pairs for each input as described below
+|}
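The fixed 5-byte header makes it trivial to distinguish a PSBT from a network-serialized transaction. A minimal sketch, with a hypothetical helper name:

```python
# ASCII "psbt" (0x70736274) followed by the 0xff separator
PSBT_MAGIC = b'psbt\xff'

def is_psbt(raw):
    """Return True if raw begins with the 5-byte PSBT header."""
    return raw[:5] == PSBT_MAGIC
```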
+
+Each block of data between separators can be viewed as a scope, and all separators
+are required<ref>'''Why are all separators required?''' The separators are required
+so that the unserializer knows which input it is unserializing data for.</ref>.
+Types can be skipped when they are unnecessary. For example, if an input is a witness
+input, then it should not have a Non-Witness UTXO key-value pair.
+
+If the signer encounters key-value pairs that it does not understand, it must
+pass those key-value pairs through when re-serializing the transaction.
+
+===Handling Duplicated Keys===
+
+Keys within each scope should never be duplicated; all keys in the format are unique. However, implementors
+will still need to handle cases where keys are duplicated, whether duplicated in the transaction
+itself or when combining transactions with duplicated fields. If duplicated keys are
+encountered, the software may choose to use any of the values corresponding to that key.
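The duplicate-key rule permits any choice among the candidate values, so a combiner could implement it as simply as keeping the first value seen. A hypothetical sketch:

```python
def combine_maps(*maps):
    """Merge key-value maps; on duplicate keys, the first value seen wins."""
    combined = {}
    for m in maps:
        for key, value in m.items():
            combined.setdefault(key, value)
    return combined
```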
+
+==Responsibilities==
+
+Using the transaction format involves many different responsibilities. These responsibilities can be handled by a single entity, but each responsibility is specialized in what it should be capable of doing.
+
+===Creator===
+
+The Creator must be capable of accepting either a network serialized transaction, or a PSBT.
+The Creator can either produce a new PSBT, or update the provided PSBT.
+For any scriptSigs which are non-final, the Creator will provide an empty scriptSig and input fields with information from the scriptSig, if any.
+If possible, the Creator should also look for any required redeemScripts and witnessScripts and add those to the global data section of the PSBT.
+The Creator should also provide any related UTXOs that it knows about.
+
+===Signer===
+
+The Signer must only accept a PSBT.
+The Signer must only use the UTXOs provided in the PSBT to produce signatures for inputs.
+The Signer should not require any additional data sources, as all necessary information is provided in the PSBT format.
+Any signatures created by the Signer must be added as a "Partial Signature" key-value pair for the respective input it relates to.
+
+The Signer can additionally compute the addresses and values being sent, and the transaction fee, optionally showing this data to the user as a confirmation of intent and the consequences of signing the PSBT.
+
+===Combiner===
+
+The Combiner can accept one or many PSBTs.
+The Combiner must merge them into one PSBT (if possible), or fail.
+The resulting PSBT must contain all of the key-value pairs from each of the PSBTs.
+The Combiner must remove any duplicate key-value pairs, in accordance with the specification.
+
+===Finalizer===
+
+The Finalizer must only accept a PSBT.
+The Finalizer transforms a PSBT into a network serialized transaction.
+
+For any inputs which are not complete, the Finalizer will emplace an empty scriptSig in the network serialized transaction.
+For any input which has a complete set of signatures, the Finalizer must attempt to build the complete scriptSig and encode it into the network serialized transaction.
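For the simplest case, a plain P2PKH input, completing the scriptSig means concatenating a push of the signature with a push of the public key. A hedged sketch, with hypothetical helper names; only direct push opcodes are handled:

```python
def push(data):
    # direct push opcodes 0x01-0x4b cover signatures and public keys
    assert 0 < len(data) < 0x4c
    return bytes([len(data)]) + data

def p2pkh_script_sig(signature, pubkey):
    """Build the final scriptSig for a P2PKH input: <sig> <pubkey>."""
    return push(signature) + push(pubkey)
```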
+
+==Extensibility==
+
+The Partially Signed Transaction format can be extended in the future by adding
+new types for key-value pairs. Backward compatibility will still be maintained, as those new
+types will be ignored and passed through by signers which do not know about them.
+
+Additional key-value maps with different types for the key-value pairs can be added on to
+the end of the format. The number of each map that follows must be specified in the globals
+section so that parsers will know when to use different definitions of the data types.
+
+==Compatibility==
+
+This transaction format is designed so that it is unable to be properly unserialized
+by normal transaction unserializers. Likewise, a normal transaction will not be
+able to be unserialized by an unserializer for the PSBT format.
+
+==Examples==
+
+===Manual CoinJoin Workflow===
+
+<img src="bip-0174/coinjoin-workflow.png" align="middle"></img>
+
+===2-of-3 Multisig Workflow===
+
+<img src="bip-0174/multisig-workflow.png" align="middle"></img>
+
+==Test Vectors==
+
+The following test vectors are done with keys derived from the following master private key. Keypaths and individual private keys will be specified when necessary.
+
+<pre>
+tprv8ZgxMBicQKsPdHrvvmuEXXZ7f5EheFqshqVmtPjeLLMjqwrWbSeuGDcgJU1icTHtLjYiGewa5zcMScbGSRR8AqB8A5wvB3XRdNYBDMhXpBS
+</pre>
+
+The following are invalid PSBTs:
+
+{| class="wikitable" style="width: auto; text-align: center; font-size: smaller; table-layout: fixed;"
+!Test Case
+!Explanation
+|-
+| <pre>0200000001268171371edff285e937adeea4b37b78000c0566cbb3ad64641713ca42171bf6000000006a473044022070b2245123e6bf474d60c5b50c043d4c691a5d2435f09a34a7662a9dc251790a022001329ca9dacf280bdf30740ec0390422422c81cb45839457aeb76fc12edd95b3012102657d118d3357b8e0f4c2cd46db7b39f6d9c38d9a70abcb9b2de5dc8dbfe4ce31feffffff02d3dff505000000001976a914d0c59903c5bac2868760e90fd521a4665aa7652088ac00e1f5050000000017a9143545e6e33b832c47050f24d3eeb93c9c03948bc787b32e1300</pre>
+| Network transaction, not PSBT format
+|-
+| <pre>70736274ff0100750200000001268171371edff285e937adeea4b37b78000c0566cbb3ad64641713ca42171bf60000000000feffffff02d3dff505000000001976a914d0c59903c5bac2868760e90fd521a4665aa7652088ac00e1f5050000000017a9143545e6e33b832c47050f24d3eeb93c9c03948bc787b32e1300000100fda5010100000000010289a3c71eab4d20e0371bbba4cc698fa295c9463afa2e397f8533ccb62f9567e50100000017160014be18d152a9b012039daf3da7de4f53349eecb985ffffffff86f8aa43a71dff1448893a530a7237ef6b4608bbb2dd2d0171e63aec6a4890b40100000017160014fe3e9ef1a745e974d902c4355943abcb34bd5353ffffffff0200c2eb0b000000001976a91485cff1097fd9e008bb34af709c62197b38978a4888ac72fef84e2c00000017a914339725ba21efd62ac753a9bcd067d6c7a6a39d05870247304402202712be22e0270f394f568311dc7ca9a68970b8025fdd3b240229f07f8a5f3a240220018b38d7dcd314e734c9276bd6fb40f673325bc4baa144c800d2f2f02db2765c012103d2e15674941bad4a996372cb87e1856d3652606d98562fe39c5e9e7e413f210502483045022100d12b852d85dcd961d2f5f4ab660654df6eedcc794c0c33ce5cc309ffb5fce58d022067338a8e0e1725c197fb1a88af59f51e44e4255b20167c8684031c05d1f2592a01210223b72beef0965d10be0778efecd61fcac6f79a4ea169393380734464f84f2ab300000000</pre>
+| PSBT missing null terminator
+|-
+| <pre>70736274ff0100fd0a010200000002ab0949a08c5af7c49b8212f417e2f15ab3f5c33dcf153821a8139f877a5b7be4000000006a47304402204759661797c01b036b25928948686218347d89864b719e1f7fcf57d1e511658702205309eabf56aa4d8891ffd111fdf1336f3a29da866d7f8486d75546ceedaf93190121035cdc61fc7ba971c0b501a646a2a83b102cb43881217ca682dc86e2d73fa88292feffffffab0949a08c5af7c49b8212f417e2f15ab3f5c33dcf153821a8139f877a5b7be40100000000feffffff02603bea0b000000001976a914768a40bbd740cbe81d988e71de2a4d5c71396b1d88ac8e240000000000001976a9146f4620b553fa095e721b9ee0efe9fa039cca459788ac0000000015013545e6e33b832c47050f24d3eeb93c9c03948bc716001485d13537f2e265405a34dbafa9e3dda01fb823080001012000e1f5050000000017a9143545e6e33b832c47050f24d3eeb93c9c03948bc7870104010200</pre>
+| PSBT with one P2PKH input and one P2SH-P2WPKH input with only the first input signed, finalized, and skipped. Input index is specified but total input count is not given.
+|-
+| <pre>70736274ff0100fd0a010200000002ab0949a08c5af7c49b8212f417e2f15ab3f5c33dcf153821a8139f877a5b7be4000000006a47304402204759661797c01b036b25928948686218347d89864b719e1f7fcf57d1e511658702205309eabf56aa4d8891ffd111fdf1336f3a29da866d7f8486d75546ceedaf93190121035cdc61fc7ba971c0b501a646a2a83b102cb43881217ca682dc86e2d73fa88292feffffffab0949a08c5af7c49b8212f417e2f15ab3f5c33dcf153821a8139f877a5b7be40100000000feffffff02603bea0b000000001976a914768a40bbd740cbe81d988e71de2a4d5c71396b1d88ac8e240000000000001976a9146f4620b553fa095e721b9ee0efe9fa039cca459788ac0000000015013545e6e33b832c47050f24d3eeb93c9c03948bc716001485d13537f2e265405a34dbafa9e3dda01fb82308010401010001012000e1f5050000000017a9143545e6e33b832c47050f24d3eeb93c9c03948bc78700</pre>
+| PSBT with one P2PKH input and one P2SH-P2WPKH input with only the first input signed, finalized, and skipped. Total input count is given but second input does not have its index.
+|}
+
+The following are valid PSBTs:
+
+{| class="wikitable" style="width: auto; text-align: center; font-size: smaller; table-layout: fixed;"
+!Test Case
+!Explanation
+|-
+| <pre>70736274ff0100750200000001268171371edff285e937adeea4b37b78000c0566cbb3ad64641713ca42171bf60000000000feffffff02d3dff505000000001976a914d0c59903c5bac2868760e90fd521a4665aa7652088ac00e1f5050000000017a9143545e6e33b832c47050f24d3eeb93c9c03948bc787b32e1300000100fda5010100000000010289a3c71eab4d20e0371bbba4cc698fa295c9463afa2e397f8533ccb62f9567e50100000017160014be18d152a9b012039daf3da7de4f53349eecb985ffffffff86f8aa43a71dff1448893a530a7237ef6b4608bbb2dd2d0171e63aec6a4890b40100000017160014fe3e9ef1a745e974d902c4355943abcb34bd5353ffffffff0200c2eb0b000000001976a91485cff1097fd9e008bb34af709c62197b38978a4888ac72fef84e2c00000017a914339725ba21efd62ac753a9bcd067d6c7a6a39d05870247304402202712be22e0270f394f568311dc7ca9a68970b8025fdd3b240229f07f8a5f3a240220018b38d7dcd314e734c9276bd6fb40f673325bc4baa144c800d2f2f02db2765c012103d2e15674941bad4a996372cb87e1856d3652606d98562fe39c5e9e7e413f210502483045022100d12b852d85dcd961d2f5f4ab660654df6eedcc794c0c33ce5cc309ffb5fce58d022067338a8e0e1725c197fb1a88af59f51e44e4255b20167c8684031c05d1f2592a01210223b72beef0965d10be0778efecd61fcac6f79a4ea169393380734464f84f2ab30000000000</pre>
+| PSBT with one P2PKH input which has a non-final scriptSig.
+|-
+| <pre>70736274ff0100750200000001268171371edff285e937adeea4b37b78000c0566cbb3ad64641713ca42171bf60000000000feffffff02d3dff505000000001976a914d0c59903c5bac2868760e90fd521a4665aa7652088ac00e1f5050000000017a9143545e6e33b832c47050f24d3eeb93c9c03948bc787b32e1300000100fda5010100000000010289a3c71eab4d20e0371bbba4cc698fa295c9463afa2e397f8533ccb62f9567e50100000017160014be18d152a9b012039daf3da7de4f53349eecb985ffffffff86f8aa43a71dff1448893a530a7237ef6b4608bbb2dd2d0171e63aec6a4890b40100000017160014fe3e9ef1a745e974d902c4355943abcb34bd5353ffffffff0200c2eb0b000000001976a91485cff1097fd9e008bb34af709c62197b38978a4888ac72fef84e2c00000017a914339725ba21efd62ac753a9bcd067d6c7a6a39d05870247304402202712be22e0270f394f568311dc7ca9a68970b8025fdd3b240229f07f8a5f3a240220018b38d7dcd314e734c9276bd6fb40f673325bc4baa144c800d2f2f02db2765c012103d2e15674941bad4a996372cb87e1856d3652606d98562fe39c5e9e7e413f210502483045022100d12b852d85dcd961d2f5f4ab660654df6eedcc794c0c33ce5cc309ffb5fce58d022067338a8e0e1725c197fb1a88af59f51e44e4255b20167c8684031c05d1f2592a01210223b72beef0965d10be0778efecd61fcac6f79a4ea169393380734464f84f2ab3000000000103040100000000</pre>
+| PSBT with one P2PKH input which has a non-final scriptSig and has a sighash type specified.
+|-
+| <pre>70736274ff0100a00200000002ab0949a08c5af7c49b8212f417e2f15ab3f5c33dcf153821a8139f877a5b7be40000000000feffffffab0949a08c5af7c49b8212f417e2f15ab3f5c33dcf153821a8139f877a5b7be40100000000feffffff02603bea0b000000001976a914768a40bbd740cbe81d988e71de2a4d5c71396b1d88ac8e240000000000001976a9146f4620b553fa095e721b9ee0efe9fa039cca459788ac0000000015013545e6e33b832c47050f24d3eeb93c9c03948bc716001485d13537f2e265405a34dbafa9e3dda01fb82308000100df0200000001268171371edff285e937adeea4b37b78000c0566cbb3ad64641713ca42171bf6000000006a473044022070b2245123e6bf474d60c5b50c043d4c691a5d2435f09a34a7662a9dc251790a022001329ca9dacf280bdf30740ec0390422422c81cb45839457aeb76fc12edd95b3012102657d118d3357b8e0f4c2cd46db7b39f6d9c38d9a70abcb9b2de5dc8dbfe4ce31feffffff02d3dff505000000001976a914d0c59903c5bac2868760e90fd521a4665aa7652088ac00e1f5050000000017a9143545e6e33b832c47050f24d3eeb93c9c03948bc787b32e13000001012000e1f5050000000017a9143545e6e33b832c47050f24d3eeb93c9c03948bc78700</pre>
+| PSBT with one P2PKH input and one P2SH-P2WPKH input both with non-final scriptSigs. P2SH-P2WPKH input's redeemScript is available.
+|-
+| <pre>70736274ff0100fd0a010200000002ab0949a08c5af7c49b8212f417e2f15ab3f5c33dcf153821a8139f877a5b7be4000000006a47304402204759661797c01b036b25928948686218347d89864b719e1f7fcf57d1e511658702205309eabf56aa4d8891ffd111fdf1336f3a29da866d7f8486d75546ceedaf93190121035cdc61fc7ba971c0b501a646a2a83b102cb43881217ca682dc86e2d73fa88292feffffffab0949a08c5af7c49b8212f417e2f15ab3f5c33dcf153821a8139f877a5b7be40100000000feffffff02603bea0b000000001976a914768a40bbd740cbe81d988e71de2a4d5c71396b1d88ac8e240000000000001976a9146f4620b553fa095e721b9ee0efe9fa039cca459788ac0000000015013545e6e33b832c47050f24d3eeb93c9c03948bc716001485d13537f2e265405a34dbafa9e3dda01fb82308000001012000e1f5050000000017a9143545e6e33b832c47050f24d3eeb93c9c03948bc78700</pre>
+| PSBT with one P2PKH input and one P2SH-P2WPKH input with only the first input signed and finalized.
+|-
+| <pre>70736274ff0100fd0a010200000002ab0949a08c5af7c49b8212f417e2f15ab3f5c33dcf153821a8139f877a5b7be4000000006a47304402204759661797c01b036b25928948686218347d89864b719e1f7fcf57d1e511658702205309eabf56aa4d8891ffd111fdf1336f3a29da866d7f8486d75546ceedaf93190121035cdc61fc7ba971c0b501a646a2a83b102cb43881217ca682dc86e2d73fa88292feffffffab0949a08c5af7c49b8212f417e2f15ab3f5c33dcf153821a8139f877a5b7be40100000000feffffff02603bea0b000000001976a914768a40bbd740cbe81d988e71de2a4d5c71396b1d88ac8e240000000000001976a9146f4620b553fa095e721b9ee0efe9fa039cca459788ac0000000015013545e6e33b832c47050f24d3eeb93c9c03948bc716001485d13537f2e265405a34dbafa9e3dda01fb82308010401010001012000e1f5050000000017a9143545e6e33b832c47050f24d3eeb93c9c03948bc7870104010100</pre>
+| PSBT with one P2PKH input and one P2SH-P2WPKH input with only the first input signed, finalized, and skipped. Input indexes are used.
+|-
+| <pre>70736274ff0100550200000001279a2323a5dfb51fc45f220fa58b0fc13e1e3342792a85d7e36cd6333b5cbc390000000000ffffffff01a05aea0b000000001976a914ffe9c0061097cc3b636f2cb0460fa4fc427d2b4588ac0000000015016345200f68d189e1adc0df1c4d16ea8f14c0dbeb220020771fd18ad459666dd49f3d564e3dbc42f4c84774e360ada16816a8ed488d56812102771fd18ad459666dd49f3d564e3dbc42f4c84774e360ada16816a8ed488d568147522103b1341ccba7683b6af4f1238cd6e97e7167d569fac47f1e48d47541844355bd462103de55d1e1dac805e3f8a58c1fbf9b94c02f3dbaafe127fefca4995f26f82083bd52ae220303b1341ccba7683b6af4f1238cd6e97e7167d569fac47f1e48d47541844355bd4610b4a6ba67000000800000008004000080220303de55d1e1dac805e3f8a58c1fbf9b94c02f3dbaafe127fefca4995f26f82083bd10b4a6ba67000000800000008005000080000100fd51010200000002f1d8d4b1acab9217bcbd0a09e37876efd79cf753baa2b2362e7d429c0deafbf5000000006a47304402202f29ddfff387626cf43fcae483456fb9d12d7f50fb10b39c245bab238d960d6502200f32fa3197dc6aa1fc870e33d8c590378862ce0b9bf6be865d5aac0a7390ae3a012102ead596687ca806043edc3de116cdf29d5e9257c196cd055cf698c8d02bf24e99fefffffff1d8d4b1acab9217bcbd0a09e37876efd79cf753baa2b2362e7d429c0deafbf5010000006b483045022100dc3bc94086fd7d48102a8290c737e27841bc1ce587fd4d9efe96a37d88c03a6502206dea717b8225b4ae9e1624bfc02927edac222ee094bf009996d9d0305d7645f501210394f62be9df19952c5587768aeb7698061ad2c4a25c894f47d8c162b4d7213d05feffffff01955eea0b0000000017a9146345200f68d189e1adc0df1c4d16ea8f14c0dbeb87fb2e1300220203b1341ccba7683b6af4f1238cd6e97e7167d569fac47f1e48d47541844355bd4646304302200424b58effaaa694e1559ea5c93bbfd4a89064224055cdf070b6771469442d07021f5c8eb0fea6516d60b8acb33ad64ede60e8785bfb3aa94b99bdf86151db9a9a0100</pre>
+| PSBT with one P2SH-P2WSH input of a 2-of-2 multisig, redeemScript, witnessScript, and keypaths are available. Contains one signature.
+|}
+
+A creator with only the following:
+
+* Redeem Scripts:
+** <tt>522103c8727ce35f1c93eb0be21406ee9a923c89219fe9c9e8504c8314a6a22d1295c02103c74dc710c407d7db6e041ee212d985cd2826d93f806ed44912b9a1da691c977352ae</tt>
+** <tt>0020a8f44467bf171d51499153e01c0bd6291109fc38bd21b3c3224c9dc6b57590df</tt>
+* Witness Scripts:
+** <tt>522102e80dec31d167865c1685e9d7a9291e66a4ea22c65cfee324289a1667ccda3b87210258cbbc3cb295a8bebac233aadc7773978804993798be5390ab444f6dd4c5327e52ae</tt>
+* UTXOs
+** TXID: <tt>0a4381c05136c0cb44886a5df7c26f1930bcc2c12e00ec60e027c4378d7d8c2e</tt>, Index: <tt>1</tt>
+*** scriptPubKey: <tt>a914203736c3c06053896d7041ce8f5bae3df76cc49187</tt>
+*** value: 0.50000000
+** TXID: <tt>2c4df245d00b491bdf24965adbbccdaa7f62ccac933d3e9377f336c60c4ea096</tt>, Index: <tt>0</tt>
+*** scriptPubKey: <tt>a914f3ba8a120d960ae07d1dbe6f0c37fb4c926d76d587</tt>
+*** value: 2.00000000
+
+given this unsigned transaction:
+<pre>02000000022e8c7d8d37c427e060ec002ec1c2bc30196fc2f75d6a8844cbc03651c081430a0100000000ffffffff96a04e0cc636f377933e3d93accc627faacdbcdb5a9624df1b490bd045f24d2c0000000000ffffffff01e02be50e0000000017a914b53bb0dc1db8c8d803e3e39f784d42e4737ffa0d8700000000</pre>
+must create this PSBT:
+<pre>70736274ff01007c02000000022e8c7d8d37c427e060ec002ec1c2bc30196fc2f75d6a8844cbc03651c081430a0100000000ffffffff96a04e0cc636f377933e3d93accc627faacdbcdb5a9624df1b490bd045f24d2c0000000000ffffffff01e02be50e0000000017a914b53bb0dc1db8c8d803e3e39f784d42e4737ffa0d87000000001501203736c3c06053896d7041ce8f5bae3df76cc49147522103c8727ce35f1c93eb0be21406ee9a923c89219fe9c9e8504c8314a6a22d1295c02103c74dc710c407d7db6e041ee212d985cd2826d93f806ed44912b9a1da691c977352ae1501f3ba8a120d960ae07d1dbe6f0c37fb4c926d76d5220020a8f44467bf171d51499153e01c0bd6291109fc38bd21b3c3224c9dc6b57590df2102a8f44467bf171d51499153e01c0bd6291109fc38bd21b3c3224c9dc6b57590df47522102e80dec31d167865c1685e9d7a9291e66a4ea22c65cfee324289a1667ccda3b87210258cbbc3cb295a8bebac233aadc7773978804993798be5390ab444f6dd4c5327e52ae000100fdff0002000000018b2dd2f735d0a9338af96402a8a91e4841cd3fed882362e7329fb04f1ff65325000000006a473044022077bedfea9910c9ba4e00dec941dace974f8b47349992c5d4312c1cf5796cce5502206164e6bfff7ac11590064ca571583709337c8a38973db2e70f4e9d93b3bcce1d0121032d64447459784e37cb2dda366c697adbbdc8aae2ad6db74ed2dade39d75882fafeffffff0382b42a04000000001976a914da533648fd339d5797790e6bb1667d9e86fdfb6888ac80f0fa020000000017a914203736c3c06053896d7041ce8f5bae3df76cc4918700b4c4040000000017a914b53bb0dc1db8c8d803e3e39f784d42e4737ffa0d879e2f13000001012000c2eb0b0000000017a914f3ba8a120d960ae07d1dbe6f0c37fb4c926d76d58700</pre>
+
+Given the above PSBT, a signer with the following keys:
+* <tt>cQxozhqme9dcDbxT97uDu1P32Cnywc5nAMhPtQwyWhVgQnP43WGH</tt>
+* <tt>cP3ArXq5BpHE94R4buJ5uma4pyKvaWXUd5Bpsy3hS2zA69X9KMnM</tt>
+must create this PSBT:
+<pre>70736274ff01007c02000000022e8c7d8d37c427e060ec002ec1c2bc30196fc2f75d6a8844cbc03651c081430a0100000000ffffffff96a04e0cc636f377933e3d93accc627faacdbcdb5a9624df1b490bd045f24d2c0000000000ffffffff01e02be50e0000000017a914b53bb0dc1db8c8d803e3e39f784d42e4737ffa0d87000000001501203736c3c06053896d7041ce8f5bae3df76cc49147522103c8727ce35f1c93eb0be21406ee9a923c89219fe9c9e8504c8314a6a22d1295c02103c74dc710c407d7db6e041ee212d985cd2826d93f806ed44912b9a1da691c977352ae1501f3ba8a120d960ae07d1dbe6f0c37fb4c926d76d5220020a8f44467bf171d51499153e01c0bd6291109fc38bd21b3c3224c9dc6b57590df2102a8f44467bf171d51499153e01c0bd6291109fc38bd21b3c3224c9dc6b57590df47522102e80dec31d167865c1685e9d7a9291e66a4ea22c65cfee324289a1667ccda3b87210258cbbc3cb295a8bebac233aadc7773978804993798be5390ab444f6dd4c5327e52ae000100fdff0002000000018b2dd2f735d0a9338af96402a8a91e4841cd3fed882362e7329fb04f1ff65325000000006a473044022077bedfea9910c9ba4e00dec941dace974f8b47349992c5d4312c1cf5796cce5502206164e6bfff7ac11590064ca571583709337c8a38973db2e70f4e9d93b3bcce1d0121032d64447459784e37cb2dda366c697adbbdc8aae2ad6db74ed2dade39d75882fafeffffff0382b42a04000000001976a914da533648fd339d5797790e6bb1667d9e86fdfb6888ac80f0fa020000000017a914203736c3c06053896d7041ce8f5bae3df76cc4918700b4c4040000000017a914b53bb0dc1db8c8d803e3e39f784d42e4737ffa0d879e2f1300220203c74dc710c407d7db6e041ee212d985cd2826d93f806ed44912b9a1da691c977347304402202a690a7a8d5763839df48285dea09f8ca69accd0227db9b735858eb87512a35b02204d294da3240bb1b069b728ddd5ce77dab61a9edf8db996268775a79a62817286010001012000c2eb0b0000000017a914f3ba8a120d960ae07d1dbe6f0c37fb4c926d76d587220202e80dec31d167865c1685e9d7a9291e66a4ea22c65cfee324289a1667ccda3b87483045022100f75f171e172383026972f8ed9986dba1db1f4fd12c9530b27d216b0b9fea60ac02206b288ffdeb2c6aa5e6c24aea4294e91c384249b04b29977dff7d5d53d8db71520100</pre>
+
+Given the above blank PSBT, a signer with the following keys:
+* <tt>cUL8UxFiJjnLkkZwmmXDxmaNRQfEMDP44eZnSaiYR3KUJNv82chM</tt>
+* <tt>cNQm3eSF9rQnpoUB8xThUVDfaeRVEckPK11rGB6LjweFdhwcCS4A</tt>
+must create this PSBT:
+<pre>70736274ff01007c02000000022e8c7d8d37c427e060ec002ec1c2bc30196fc2f75d6a8844cbc03651c081430a0100000000ffffffff96a04e0cc636f377933e3d93accc627faacdbcdb5a9624df1b490bd045f24d2c0000000000ffffffff01e02be50e0000000017a914b53bb0dc1db8c8d803e3e39f784d42e4737ffa0d87000000001501203736c3c06053896d7041ce8f5bae3df76cc49147522103c8727ce35f1c93eb0be21406ee9a923c89219fe9c9e8504c8314a6a22d1295c02103c74dc710c407d7db6e041ee212d985cd2826d93f806ed44912b9a1da691c977352ae1501f3ba8a120d960ae07d1dbe6f0c37fb4c926d76d5220020a8f44467bf171d51499153e01c0bd6291109fc38bd21b3c3224c9dc6b57590df2102a8f44467bf171d51499153e01c0bd6291109fc38bd21b3c3224c9dc6b57590df47522102e80dec31d167865c1685e9d7a9291e66a4ea22c65cfee324289a1667ccda3b87210258cbbc3cb295a8bebac233aadc7773978804993798be5390ab444f6dd4c5327e52ae000100fdff0002000000018b2dd2f735d0a9338af96402a8a91e4841cd3fed882362e7329fb04f1ff65325000000006a473044022077bedfea9910c9ba4e00dec941dace974f8b47349992c5d4312c1cf5796cce5502206164e6bfff7ac11590064ca571583709337c8a38973db2e70f4e9d93b3bcce1d0121032d64447459784e37cb2dda366c697adbbdc8aae2ad6db74ed2dade39d75882fafeffffff0382b42a04000000001976a914da533648fd339d5797790e6bb1667d9e86fdfb6888ac80f0fa020000000017a914203736c3c06053896d7041ce8f5bae3df76cc4918700b4c4040000000017a914b53bb0dc1db8c8d803e3e39f784d42e4737ffa0d879e2f1300220203c8727ce35f1c93eb0be21406ee9a923c89219fe9c9e8504c8314a6a22d1295c047304402204a33aa884465a7d909000c366afb90c9256b66575f0c7e5f12446a16d8cc1a4d02203fa9fc43d50168f000b280be6b3db916cf9e483de8e6d9eac948b0d08f7601df010001012000c2eb0b0000000017a914f3ba8a120d960ae07d1dbe6f0c37fb4c926d76d58722020258cbbc3cb295a8bebac233aadc7773978804993798be5390ab444f6dd4c5327e483045022100cdac5ee547b60f79feec111d0e082c3350b30a087c130d5e734e0199b3f8c14702205deddd38d8f7ddb19931059f46b2de0c8548fe79f8c8aea34c5e653ea0136b950100</pre>
+
+Given both of the above PSBTs, a combiner must create this PSBT:
+<pre>70736274ff01007c02000000022e8c7d8d37c427e060ec002ec1c2bc30196fc2f75d6a8844cbc03651c081430a0100000000ffffffff96a04e0cc636f377933e3d93accc627faacdbcdb5a9624df1b490bd045f24d2c0000000000ffffffff01e02be50e0000000017a914b53bb0dc1db8c8d803e3e39f784d42e4737ffa0d87000000001501203736c3c06053896d7041ce8f5bae3df76cc49147522103c8727ce35f1c93eb0be21406ee9a923c89219fe9c9e8504c8314a6a22d1295c02103c74dc710c407d7db6e041ee212d985cd2826d93f806ed44912b9a1da691c977352ae1501f3ba8a120d960ae07d1dbe6f0c37fb4c926d76d5220020a8f44467bf171d51499153e01c0bd6291109fc38bd21b3c3224c9dc6b57590df2102a8f44467bf171d51499153e01c0bd6291109fc38bd21b3c3224c9dc6b57590df47522102e80dec31d167865c1685e9d7a9291e66a4ea22c65cfee324289a1667ccda3b87210258cbbc3cb295a8bebac233aadc7773978804993798be5390ab444f6dd4c5327e52ae000100fdff0002000000018b2dd2f735d0a9338af96402a8a91e4841cd3fed882362e7329fb04f1ff65325000000006a473044022077bedfea9910c9ba4e00dec941dace974f8b47349992c5d4312c1cf5796cce5502206164e6bfff7ac11590064ca571583709337c8a38973db2e70f4e9d93b3bcce1d0121032d64447459784e37cb2dda366c697adbbdc8aae2ad6db74ed2dade39d75882fafeffffff0382b42a04000000001976a914da533648fd339d5797790e6bb1667d9e86fdfb6888ac80f0fa020000000017a914203736c3c06053896d7041ce8f5bae3df76cc4918700b4c4040000000017a914b53bb0dc1db8c8d803e3e39f784d42e4737ffa0d879e2f1300220203c74dc710c407d7db6e041ee212d985cd2826d93f806ed44912b9a1da691c977347304402202a690a7a8d5763839df48285dea09f8ca69accd0227db9b735858eb87512a35b02204d294da3240bb1b069b728ddd5ce77dab61a9edf8db996268775a79a6281728601220203c8727ce35f1c93eb0be21406ee9a923c89219fe9c9e8504c8314a6a22d1295c047304402204a33aa884465a7d909000c366afb90c9256b66575f0c7e5f12446a16d8cc1a4d02203fa9fc43d50168f000b280be6b3db916cf9e483de8e6d9eac948b0d08f7601df010001012000c2eb0b0000000017a914f3ba8a120d960ae07d1dbe6f0c37fb4c926d76d58722020258cbbc3cb295a8bebac233aadc7773978804993798be5390ab444f6dd4c5327e483045022100cdac5ee547b60f79feec111d0e082c3350b30a087c130d5e734e0199b3f8c14702205deddd38d8f7ddb19931059f46b2de0c8548fe79f8c8aea34c5e653ea0136b9501220202e80dec31d167865c1685e9d7a9291e66a4ea22c65cfee324289a1667ccda3b87483045022100f75f171e172383026972f8ed9986dba1db1f4fd12c9530b27d216b0b9fea60ac02206b288ffdeb2c6aa5e6c24aea4294e91c384249b04b29977dff7d5d53d8db71520100</pre>
+
+Given the above PSBT, a finalizer must create this complete Bitcoin transaction:
+<pre>020000000001022e8c7d8d37c427e060ec002ec1c2bc30196fc2f75d6a8844cbc03651c081430a01000000d90047304402204a33aa884465a7d909000c366afb90c9256b66575f0c7e5f12446a16d8cc1a4d02203fa9fc43d50168f000b280be6b3db916cf9e483de8e6d9eac948b0d08f7601df0147304402202a690a7a8d5763839df48285dea09f8ca69accd0227db9b735858eb87512a35b02204d294da3240bb1b069b728ddd5ce77dab61a9edf8db996268775a79a628172860147522103c8727ce35f1c93eb0be21406ee9a923c89219fe9c9e8504c8314a6a22d1295c02103c74dc710c407d7db6e041ee212d985cd2826d93f806ed44912b9a1da691c977352aeffffffff96a04e0cc636f377933e3d93accc627faacdbcdb5a9624df1b490bd045f24d2c0000000023220020a8f44467bf171d51499153e01c0bd6291109fc38bd21b3c3224c9dc6b57590dfffffffff01e02be50e0000000017a914b53bb0dc1db8c8d803e3e39f784d42e4737ffa0d87000400483045022100f75f171e172383026972f8ed9986dba1db1f4fd12c9530b27d216b0b9fea60ac02206b288ffdeb2c6aa5e6c24aea4294e91c384249b04b29977dff7d5d53d8db715201483045022100cdac5ee547b60f79feec111d0e082c3350b30a087c130d5e734e0199b3f8c14702205deddd38d8f7ddb19931059f46b2de0c8548fe79f8c8aea34c5e653ea0136b950147522102e80dec31d167865c1685e9d7a9291e66a4ea22c65cfee324289a1667ccda3b87210258cbbc3cb295a8bebac233aadc7773978804993798be5390ab444f6dd4c5327e52ae00000000</pre>
+
+==Rationale==
+
+<references/>
+
+==Reference implementation==
+
+The reference implementation of the PSBT format is available at https://github.com/achow101/bitcoin/tree/psbt.
+
+==Acknowledgements==
+
+Special thanks to Pieter Wuille for suggesting that such a transaction format should be made
+and for coming up with the name and abbreviation of PSBT.
+
+Thanks to Pieter Wuille, Gregory Maxwell, Jonathan Underwood, and Daniel Cousens for additional comments
+and suggestions for improving this proposal.
+
+==Appendix A: Data types and their specifications==
+
+All data types, their associated scope, and the BIP number defining them must be listed here.
+
+{| class="wikitable" style="width: auto; text-align: center; font-size: smaller; table-layout: fixed;"
+!Scope
+!Type values
+!BIP Number
+|-
+| Global
+| 0,1,2,3,4
+| BIP 174
+|-
+| Input
+| 0,1,2,3,4
+| BIP 174
+|}
diff --git a/bip-0174/coinjoin-workflow.png b/bip-0174/coinjoin-workflow.png
new file mode 100644
index 0000000..0909c1d
--- /dev/null
+++ b/bip-0174/coinjoin-workflow.png
Binary files differ
diff --git a/bip-0174/multisig-workflow.png b/bip-0174/multisig-workflow.png
new file mode 100644
index 0000000..0e752c5
--- /dev/null
+++ b/bip-0174/multisig-workflow.png
Binary files differ
diff --git a/bip-0175.mediawiki b/bip-0175.mediawiki
new file mode 100644
index 0000000..a3ffd1c
--- /dev/null
+++ b/bip-0175.mediawiki
@@ -0,0 +1,259 @@
+<pre>
+ BIP: 175
+ Layer: Applications
+ Title: Pay to Contract Protocol
+ Author: Omar Shibli <omar@commerceblock.com>
+ Nicholas Gregory <nicholas@commerceblock.com>
+ Comments-Summary: No comments yet.
+ Comments-URI: https://github.com/bitcoin/bips/wiki/Comments:BIP-0175
+ Status: Draft
+ Type: Informational
+ Created: 2017-07-17
+ License: BSD-2-Clause
+</pre>
+
+==Abstract==
+
+Utilizing hierarchical deterministic wallets as described in BIP-0032 and the "Purpose Field" in BIP-0043, this document specifies the multiparty pay-to-contract key derivation scheme outlined by Ilja Gerhardt and Timo Hanke.[0]
+
+The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.
+
+==Motivation==
+
+A Bitcoin transaction represents a "real world" contract between two parties transferring value. Counterparties in a business interaction traditionally keep track of a payment with bills (invoices) and receipts. Delivery of a good is made by the payee once the payer has signed the receipt, agreeing to pay for the items on the invoice. Gerhardt and Hanke [0] formulate this interaction within the confines of the Bitcoin protocol using homomorphic payment addresses and the multiparty pay-to-contract protocol.
+
+The protocol is constructed in such a way that all parties have cryptographic proof of both who is being paid and for what. Using the technique described in this BIP, an address can be provably derived from the terms of a contract and the payee's public key. This derivation scheme does not bloat the UTXO and is completely hidden to network participants; the derived address looks like any other P2(W)PKH or P2(W)SH address. Redemption of the funds requires knowledge of the contract and the payee's private key.
+
+This scheme utilizes the foundations of BIP-0032, providing a consistent way for preexisting wallet developers to implement the specification.
+
+==Specification==
+
+This key derivation scheme requires two parties: a payer (customer) and a payee (merchant).
+The customer submits to the merchant a purchase request, specifying what goods/services they would like to buy. From the purchase request the merchant constructs an invoice (contract), specifying the billable items and total amount to be paid.
+The merchant must give this contract alongside a “payment base” extended public key to the customer. Given this information, the customer will be able to fulfill the contract by generating the public key of the payment address associated with the contract and the payment base, then sending the funds there.
+
+We define the following levels in BIP32 path:
+
+<code>
+m / purpose' / coin_type' / contract_hash
+</code>
+
+<code>contract_hash</code> consists of multiple levels.
+
+An apostrophe in the path indicates that BIP32 hardened derivation is used.
+
+We define the following extended public keys:
+
+Payment base denoted as <code>payment_base</code>:
+
+ m / purpose' / coin_type'
+
+Payment address denoted as <code>payment_address</code>:
+
+ m / purpose' / coin_type' / contract_hash
+ or
+ m / payment_base / contract_hash
+
+Each level has special meaning described in the chapters below.
+
+===Purpose===
+
+Purpose is a constant set to <code>175'</code> (or <code>0x800000AF</code>) following the BIP-0043 recommendation. It indicates that the subtree of this node is used according to this specification.
+
+<code>
+m / 175' / *
+</code>
+
+Hardened derivation is used at this level.
+
+===Coin type===
+
+The coin type field is identical to the same field in BIP-0044.
+
+Hardened derivation is used at this level.
+
+===Payment address generation===
+
+Given contract documents denoted by c<sub>1</sub>,...,c<sub>n</sub>, a payment base extended public key denoted by <code>payment_base</code>, and a cryptographic hash function denoted by <code>h</code>, the payment address is generated as follows:
+
+1. Compute cryptographic hashes of all contract documents by applying the hash function.
+
+ h(c1),...,h(cn)
+
+2. Sort all hashes lexicographically.
+
+ hash_1,...,hash_n
+
+3. Concatenate <code>payment_base</code> with the sorted hashes and apply the hash function.
+
+ h(payment_base+hash_1+...+hash_n)
+
+4. Compute a partial BIP32 derivation path from the combined hash, as defined in the Hash to Partial Derivation Path Mapping procedure below.
+
+ contract_hash
+
+5. Prepend <code>payment_base</code> to the <code>contract_hash</code> derivation path.
+
+ payment_base / contract_hash
+
+6. Compute the public extended key from the derivation path in step 5.
+
+7. Compute the address (P2PKH) of the public extended key from step 6.
+
+===Payment address verification===
+
+For a given Bitcoin address, <code>payment_base</code> extended public key, contract documents denoted by c<sub>1</sub>,...,c<sub>n</sub>, and cryptographic hash function denoted by <code>h</code>, we can verify the integrity of the address by the following steps:
+
+1. Compute the payment address from the given inputs as described in the Payment Address Generation section.
+
+2. Compare the address computed in step 1 with the given Bitcoin address.
+
+===Redemption===
+
+The merchant is able to construct the private key offline using the method described in the Payment Address Generation section.
+The merchant should actively monitor the blockchain for the payment to the payment address.
+Because the address is generated from the payment base and the contract, the merchant must implicitly agree to those terms in order to spend the funds.
+The act of making the payment to that address thus serves as a receipt for the customer.
+
+===Hash to partial derivation path mapping===
+
+In this section, we define the hash to partial BIP32 derivation path mapping procedure, which maps an arbitrary hex string to a partial BIP32 derivation path.
+
+For a given hex string, do the following:
+
+1. Partition the hex string into parts of 4 characters each.
+
+2. Convert each part to an integer in decimal.
+
+3. Join all the integers with slash <code>/</code>.
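The three steps above can be sketched as a small Python helper (illustrative only; the function name is ours, not part of this specification):

```python
def hash_to_partial_path(hex_str):
    """BIP 175 hash to partial BIP32 derivation path mapping.

    Partitions the hex string into 4-character parts, converts each
    part to a decimal integer (0..65535, always a valid non-hardened
    BIP32 child index), and joins the integers with '/'.
    """
    parts = [hex_str[i:i + 4] for i in range(0, len(hex_str), 4)]
    return '/'.join(str(int(part, 16)) for part in parts)
```

For example, <code>hash_to_partial_path('31005778')</code> yields <code>12544/22392</code>; a 256-bit (64 hex character) hash maps to 16 path components.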
+
+==Examples==
+
+For the following given inputs:
+
+ master private extended key:
+ xprv9s21ZrQH143K2JF8RafpqtKiTbsbaxEeUaMnNHsm5o6wCW3z8ySyH4UxFVSfZ8n7ESu7fgir8imbZKLYVBxFPND1pniTZ81vKfd45EHKX73
+ coin type:
+ 0
+
+we can compute payment base as follows:
+
+ payment base derivation path:
+ m/175'/0'
+ contract base public extended key:
+ xpub6B3JSEWjqm5GgfzcjPwBixxLPzi15pFM3jq4E4yCzXXUFS5MFdXiSdw7b5dbdPGHuc7c1V4zXbbFRtc9G1njMUt9ZvMdGVGYQSQsurD6HAW
+
+In the examples below, we use SHA256 as the cryptographic hash function, together with the contract base public key above.
+
+====Payment address generation====
+
+As input, we have a contract that consists of two documents with the following contents:
+
+document 1:
+
+ bar
+
+document 2:
+
+ foo
+
+1. Apply the hash function:
+
+ document 1:
+ fcde2b2edba56bf408601fb721fe9b5c338d10ee429ea04fae5511b68fbf8fb9
+ document 2:
+ 2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae
+
+2. Sort all hashes lexicographically:
+
+ 2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae
+ fcde2b2edba56bf408601fb721fe9b5c338d10ee429ea04fae5511b68fbf8fb9
+
+3. Concatenate <code>payment_base</code> with the sorted hashes and apply the hash function.
+
+ concatenation of payment_base and the sorted hashes:
+ xpub6B3JSEWjqm5GgfzcjPwBixxLPzi15pFM3jq4E4yCzXXUFS5MFdXiSdw7b5dbdPGHuc7c1V4zXbbFRtc9G1njMUt9ZvMdGVGYQSQsurD6HAW2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7aefcde2b2edba56bf408601fb721fe9b5c338d10ee429ea04fae5511b68fbf8fb9
+ combined hash:
+ 310057788c6073640dc222466d003411cd5c1cc0bf2803fc6ebbfae03ceb4451
+
+4. Compute the partial BIP32 derivation path of the combined hash.
+
+ 12544/22392/35936/29540/3522/8774/27904/13329/52572/7360/48936/1020/28347/64224/15595/17489
+
+5. Prepend <code>payment_base</code> to <code>contract_hash</code> derivation path.
+
+ contract_base_pub/12544/22392/35936/29540/3522/8774/27904/13329/52572/7360/48936/1020/28347/64224/15595/17489
+ or
+ m/175'/0'/12544/22392/35936/29540/3522/8774/27904/13329/52572/7360/48936/1020/28347/64224/15595/17489
+
+6. Compute public extended key.
+
+ xpub6hefaATTG5LbcwyPDvmNfnkyzefoM2TJDoo5astH7Gvs1g8vZURviBWvAvBnWc2CNb8ybJ6mDpnQYVsvNSZ3oUmbssX3rUVG97TFYa6AXVk
+
+7. Compute address of the public extended key (P2PKH).
+
+ 1C7f322izqMqLzZzfzkPAjxBzprxDi47Yf
+
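Steps 1 through 5 of the example above can be reproduced with the Python sketch below, using only the standard library (steps 6 and 7 additionally require a BIP32 implementation and are omitted here):

```python
import hashlib

PAYMENT_BASE = 'xpub6B3JSEWjqm5GgfzcjPwBixxLPzi15pFM3jq4E4yCzXXUFS5MFdXiSdw7b5dbdPGHuc7c1V4zXbbFRtc9G1njMUt9ZvMdGVGYQSQsurD6HAW'

def sha256_hex(text):
    return hashlib.sha256(text.encode()).hexdigest()

# Steps 1-2: hash each contract document, then sort lexicographically.
doc_hashes = sorted(sha256_hex(doc) for doc in ('bar', 'foo'))

# Step 3: prepend the payment base, concatenate, and hash again.
combined_hash = sha256_hex(PAYMENT_BASE + ''.join(doc_hashes))

# Step 4: map the combined hash to a partial derivation path
# (4-hex-char chunks, each read as a decimal integer).
contract_hash = '/'.join(
    str(int(combined_hash[i:i + 4], 16)) for i in range(0, 64, 4))

# Step 5: prepend the payment base derivation path.
payment_address_path = "m/175'/0'/" + contract_hash
```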
+
+====Verification example (negative test)====
+
+Similar to the example above, except this time the contract consists of a single document, with the following content:
+
+document 1:
+
+ baz
+
+1. Apply the hash function.
+
+ baa5a0964d3320fbc0c6a922140453c8513ea24ab8fd0577034804a967248096
+
+2. Prepend <code>payment_base</code>.
+
+ xpub6B3JSEWjqm5GgfzcjPwBixxLPzi15pFM3jq4E4yCzXXUFS5MFdXiSdw7b5dbdPGHuc7c1V4zXbbFRtc9G1njMUt9ZvMdGVGYQSQsurD6HAWbaa5a0964d3320fbc0c6a922140453c8513ea24ab8fd0577034804a967248096
+
+3. Apply the hash function.
+
+ 3a08605829413ce0bf551b08d21e4a28dbda6e407f90eff1c448e839050c73a1
+
+4. Compute the partial derivation path.
+
+ 5338/54412/19213/962/30664/62597/11873/59874/56779/24089/54550/19585/28087/36422/18666/17562
+
+5. Prepend contract_base<sub>pub</sub> to the contract_hash derivation path.
+
+ contract_base_pub/5338/54412/19213/962/30664/62597/11873/59874/56779/24089/54550/19585/28087/36422/18666/17562
+ or
+ m/175'/0'/5338/54412/19213/962/30664/62597/11873/59874/56779/24089/54550/19585/28087/36422/18666/17562
+
+6. Compute the public extended key.
+
+ xpub6h9k2KqsMpwghxt7naj1puhGV1ZDC88sxvpYN1HibCf8yQZdPsuhYmmvdK32Kf2Lb3rS1sV8UcZ1f84DJEiXuVfLCAj4bC85aEUCxh38m8i
+
+7. Compute address of the public extended key (P2PKH).
+
+ 1QGe5LaDMAmHeibJbZBmZqhQDZSp7QCqSs
+
+8. As expected, the address does not match the Bitcoin address from the previous example, <code>1C7f322izqMqLzZzfzkPAjxBzprxDi47Yf</code>.
+
+The verification operation will succeed only if we use documents identical to those used in payment address generation.
+
+==Compatibility==
+
+The derivation scheme proposed in this BIP is BIP32 compliant, but the specification is not backward compatible with existing BIP32 wallets: communication between payer and payee, as well as hashing the contract and generating the path, requires significant modification to the wallet.
+
+==Reference implementations==
+
+* Reference wallet implementation, based on the Copay project: https://github.com/commerceblock/copay ([https://github.com/commerceblock/copay/pull/1 pull request])
+* Reference implementation of the Hash to Partial Derivation Path Mapping in JavaScript: [https://github.com/commerceblock/pay-to-contract-lib/blob/master/lib/contract.js https://github.com/commerceblock/pay-to-contract-lib]
+
+==References==
+
+* [[bip-0032.mediawiki|BIP32 - Hierarchical Deterministic Wallets]]
+* [[bip-0043.mediawiki|BIP43 - Purpose Field for Deterministic Wallets]]
+* [[bip-0044.mediawiki|BIP44 - Multi-Account Hierarchy for Deterministic Wallets]]
+* [https://arxiv.org/abs/1212.3257 Homomorphic Payment Addresses and the Pay-to-Contract Protocol]
+
+==Copyright==
+
+This BIP is licensed under the 2-clause BSD license.
diff --git a/bip-0176.mediawiki b/bip-0176.mediawiki
new file mode 100644
index 0000000..8a49bfa
--- /dev/null
+++ b/bip-0176.mediawiki
@@ -0,0 +1,57 @@
+<pre>
+ BIP: 176
+ Title: Bits Denomination
+ Author: Jimmy Song <jaejoon@gmail.com>
+ Comments-Summary: No comments yet.
+ Comments-URI: https://github.com/bitcoin/bips/wiki/Comments:BIP-0176
+ Status: Draft
+ Type: Informational
+ Created: 2017-12-12
+ License: BSD-2-Clause
+</pre>
+
+== Abstract ==
+Bits is presented here as the standard term for 100 (one hundred) satoshis or 1/1,000,000 (one one-millionth) of a bitcoin.
+
+== Motivation ==
+The bitcoin price has grown over the years, and once the price is past $10,000 USD or so, bitcoin amounts under $10 USD have enough decimal places that it's difficult to tell whether an amount is off by a factor of 10. Switching the denomination to "bits" makes comprehension easier. For example, when BTC is $15,000 USD, $10.05 is a somewhat confusing 0.00067 BTC, versus 670 bits, which is a lot clearer.
+
+Additionally, reverse comparisons are easier, as 59 bits being $1 is easier to comprehend for most people than 0.000059 BTC being $1. Similar comparisons can be made to other currencies: 1 yen being 0.8 bits, 1 won being 0.07 bits and so on.
+
+Potential benefits of utilizing "bits" include:
+
+# Reduce user error on small bitcoin amounts.
+# Reduce unit bias for users that want a "whole" bitcoin.
+# Allow easier comparisons of prices for most users.
+# Allow easier bi-directional comparisons to fiat currencies.
+# Allow all UTXO amounts to be expressed with at most 2 decimal places, which can be easier to handle.
+
+== Specification ==
+Definition: 1 bit = 100 satoshis.
+The plural of "bit" is "bits". The terms "bit" and "bits" are not proper nouns and thus should not be capitalized except where ordinary capitalization rules apply, such as at the start of a sentence.
+
+All bitcoin-denominated items are encouraged to also show the denomination in bits, either as the default or as an option.
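As an illustrative sketch of the conversion (the helper name is ours, not part of this BIP), amounts can be handled exactly by working in satoshis:

```python
from decimal import Decimal

SATS_PER_BIT = 100
SATS_PER_BTC = 100_000_000

def btc_to_bits(btc):
    """Convert a BTC amount (given as a string or Decimal, to avoid
    binary floating-point rounding) to bits: 1 bit = 100 satoshis."""
    sats = int(Decimal(btc) * SATS_PER_BTC)
    return Decimal(sats) / SATS_PER_BIT

# The Motivation example: 0.00067 BTC is 670 bits,
# and the smallest unit, 1 satoshi, is 0.01 bit.
```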
+
+== Rationale ==
+As bitcoin grows in price versus fiat currencies, it's important to give users the ability to quickly and accurately calculate prices for transactions, savings and other economic activities. "Bits" have been used as a denomination within the Bitcoin ecosystem for some time, and this BIP formalizes that name. Additionally, "bits" is likely the only other denomination that will be needed for Bitcoin, as 0.01 bit = 1 satoshi, meaning that two decimal places are sufficient to describe any current UTXO.
+
+Existing terms used in bitcoin such as satoshi, milli-bitcoin (mBTC) and bitcoin (BTC) do not conflict as they operate at different orders of magnitude.
+
+The term micro-bitcoin (µBTC) can continue to exist in tandem with the term "bits".
+
+== Backwards Compatibility ==
+Software such as the Bitcoin Core GUI currently uses the µBTC denomination and can continue to do so. There is no obligation to switch to "bits".
+
+The term "bit" has many different definitions, but the ones of particular note are these:
+
+* 1 bit = 1/8 dollar (e.g. That candy cost me 2 bits)
+* bit meaning some amount of data (e.g. The first bit of the version field is 0)
+* bit meaning strength of a cryptographic algorithm (e.g. 256-bit ECDSA is used in Bitcoin)
+
+The first is a bit dated and isn't likely to confuse people dealing with Bitcoin. The second and third are computer science terms and context should be sufficient to figure out what the user of the word means.
+
+== Copyright ==
+This BIP is licensed under the BSD 2-clause license.
+
+== Credit ==
+It's hard to ascertain exactly who invented the term "bits", but the term has been around for a while and the author of this BIP does not take any credit for inventing the term. \ No newline at end of file
diff --git a/scripts/buildtable.pl b/scripts/buildtable.pl
index 36701a5..144ede6 100755
--- a/scripts/buildtable.pl
+++ b/scripts/buildtable.pl
@@ -19,6 +19,7 @@ my %MayHaveMulti = (
Author => undef,
'Comments-URI' => undef,
License => undef,
+ 'License-Code' => undef,
'Post-History' => undef,
);
my %DateField = (
@@ -149,9 +150,9 @@ while (++$bipnum <= $topbip) {
} elsif ($field eq 'Layer') { # BIP 123
die "Invalid layer $val in $fn" unless exists $ValidLayer{$val};
$layer = $val;
- } elsif ($field eq 'License') {
+ } elsif ($field =~ /^License(?:\-Code)?$/) {
die "Undefined license $val in $fn" unless exists $DefinedLicenses{$val};
- if (not $found{License}) {
+ if (not $found{$field}) {
die "Unacceptable license $val in $fn" unless exists $AcceptableLicenses{$val} or ($val eq 'PD' and exists $GrandfatheredPD{$bipnum});
}
} elsif ($field eq 'Comments-URI') {