author    dhruv <856960+dhruv@users.noreply.github.com>  2023-02-28 09:53:19 -0800
committer dhruv <856960+dhruv@users.noreply.github.com>  2023-02-28 09:53:19 -0800
commit    ee8a4a3bc5ac4b7cc8c5a6610bf30b42ece2ad20 (patch)
tree      09bb76d27c32cefe55096b82934580a9b2dab0c5 /bip-0324.mediawiki
parent    aa5b0a1009bcb90315ee6bdacc2592749d6d4636 (diff)
Updates to BIP324 since January 11 2023
Diffstat (limited to 'bip-0324.mediawiki')
-rw-r--r--  bip-0324.mediawiki  |  89
1 file changed, 39 insertions, 50 deletions
diff --git a/bip-0324.mediawiki b/bip-0324.mediawiki
index 47032dd..c0572a3 100644
--- a/bip-0324.mediawiki
+++ b/bip-0324.mediawiki
@@ -31,7 +31,7 @@ Bitcoin is a permissionless network whose purpose is to reach consensus over pub
This proposal for a new P2P protocol version (v2) aims to improve upon this by raising the costs for performing these attacks substantially, primarily through the use of unauthenticated, opportunistic transport encryption. In addition, the bytestream on the wire is made pseudorandom (i.e., indistinguishable from uniformly random bytes) to a passive eavesdropper.
-* Encryption, even when it is unauthenticated and only used when both endpoints support v2, impedes eavesdropping by forcing the attacker to become active: either by performing a persistent man-in-the-middle (MitM) attack, by downgrading connections to v1, or by spinning up their own nodes and getting honest nodes to make connections to them. Active attacks at scale are more resource intensive in general, but in case of manual, deliberate connections (as opposed to automatic, random ones) they are also in principle detectable: even very basic checks, e.g., operators manually comparing protocol versions and session IDs (as supported by the proposed protocol), will expose the attacker.
+* Encryption, even when it is unauthenticated and only used when both endpoints support v2, impedes eavesdropping by forcing the attacker to become active: either by performing a persistent man-in-the-middle (MitM) attack, by downgrading connections to v1, or by spinning up their own nodes and getting honest nodes to make connections to them. Active attacks at scale are more resource intensive in general, but in the case of manual, deliberate connections (as opposed to automatic, random ones), they are also in principle detectable: even very basic checks, e.g., operators manually comparing protocol versions and session IDs (as supported by the proposed protocol), will expose the attacker.
* Tampering, while already an inherently active attack, is costlier if the attacker is forced to maintain the state necessary for a full MitM interception.
* A pseudorandom bytestream excludes identification techniques based on pattern matching, and makes it easier to shape the bytestream in order to mimic other protocols used on the Internet. This raises the cost of a connection censoring firewall, forcing them to either resort to a full MitM attack, or operate on a more obvious allowlist basis, rather than a blocklist basis.
@@ -39,7 +39,7 @@ This proposal for a new P2P protocol version (v2) aims to improve upon this by r
As we have argued above, unauthenticated encryption<ref name="what_does_auth_mean">'''What does ''authentication'' mean in this context?''' Unfortunately, the term authentication in the context of secure channel protocols is ambiguous. It can refer to:
* The encryption scheme guaranteeing that a message obtained via successful decryption was encrypted by someone having access to the (symmetric) encryption key, and not modified after encryption by a third party. The proposal in this document achieves that property through the use of an AEAD.
-* The communication protocol establishing that the communication partner's identity matches who we expect them to be, through some public key mechanism. The proposal in this document does '''not''' include such a mechanism.</ref> provides strictly better security than no encryption. Thus all connections should use encryption, even if they are unauthenticated.
+* The communication protocol establishing that the communication partner's identity matches who we expect them to be, through some public key mechanism. The proposal in this document does '''not''' include such a mechanism.</ref> provides strictly better security than no encryption. Thus, all connections should use encryption, even if they are unauthenticated.
When it comes to authentication, the situation is not as clear as for encryption. Due to Bitcoin's permissionless nature, authentication will always be restricted to specific scenarios (e.g., connections between peers belonging to the same operator), and whether some form of (possibly partially anonymous) authentication is desired depends on the specific requirements of the involved peers. As a consequence, we believe that authentication should be addressed separately (if desired), and this proposal aims to provide a solid technical basis for future protocol upgrades, including the addition of optional authentication (see [https://github.com/sipa/writeups/tree/main/private-authentication-protocols Private authentication protocols]).
@@ -51,9 +51,9 @@ A pseudorandom bytestream is not self-identifying. Moreover, it is unopinionated
''' Why not use a secure tunnel protocol? '''
-Our goal includes making opportunistic encryption ubiquitously available, as that provides the best defense against large-scale attacks. That implies protecting both the manual, deliberate connections node operators instruct their software to make, as well as the the automatic connections Bitcoin nodes make with each other based on IP addresses obtained via gossip. While encryption per se is already possible with proxy networks or VPN networks, these are not desirable or applicable for automatic connections at scale:
-* Proxy networks like Tor or I2P introduce a separate address space, independent from network topology, with a very low cost per address making eclipse attacks cheaper. In comparison, clearnet IPv4 and IPv6 networks make obtaining multiple network identities in distinct, well-known network partitions carry a non-trivial cost. Thus, it is not desirable to have a substantial portion of nodes be exclusively connected this way, as this would significantly reduce Eclipse attack costs.<ref name="pure_tor_attack">'''Why is it a bad idea to have nodes exclusively connected over Tor?''' See the [https://arxiv.org/abs/1410.6079 Bitcoin over Tor isn't a Good Idea] paper</ref> Additionally, Tor connections come with significant bandwidth and latency costs that may not be desirable for all network users.
-* VPN networks like WireGuard or OpenVPN inherently define a private network, which requires manual configuration and therefore is not a realistic avenue for automatic connections.
+Our goal includes making opportunistic encryption ubiquitously available, as that provides the best defense against large-scale attacks. That implies protecting both the manual, deliberate connections node operators instruct their software to make, and the automatic connections Bitcoin nodes make with each other based on IP addresses obtained via gossip. While encryption per se is already possible with proxy networks or VPN protocols, these are not desirable or applicable for automatic connections at scale:
+* Proxy networks like Tor or I2P introduce a separate address space, independent of network topology, with a very low cost per address making eclipse attacks cheaper. In comparison, clearnet IPv4 and IPv6 networks make obtaining multiple network identities in distinct, well-known network partitions carry a non-trivial cost. Thus, it is not desirable to have a substantial portion of nodes be exclusively connected this way, as this would significantly reduce Eclipse attack costs.<ref name="pure_tor_attack">'''Why is it a bad idea to have nodes exclusively connected over Tor?''' See the [https://arxiv.org/abs/1410.6079 Bitcoin over Tor isn't a Good Idea] paper</ref> Additionally, Tor connections come with significant bandwidth and latency costs that may not be desirable for all network users.
+* VPN protocols like WireGuard or OpenVPN inherently define a private network, which requires manual configuration and therefore is not a realistic avenue for automatic connections.
Thus, to achieve our goal, we need a solution that has minimal costs, works without configuration, and is always enabled – on top of any network layer rather than be part of the network layer.
@@ -61,16 +61,16 @@ Thus, to achieve our goal, we need a solution that has minimal costs, works with
While it would be possible to rely on an off-the-shelf transport encryption protocol such as TLS or Noise, the specific requirements of the Bitcoin P2P network laid out above make these protocols an unsuitable choice.
-The primary requirement which existing protocols fail to meet is a sufficiently modular treatment of encryption and authentication. As we argue above, whether and which form of authentication is desired in the Bitcoin P2P network will depend on the specific requirements of the involved peers (resulting in a mix of authenticated and unauthenticated connections), and thus the question of authentication should be decoupled from encryption. However, native support for a handful of standard authentication scenarios (e.g., using digital signatures and certificates) is at core of the design of existing general-purpose transport encryption protocols. This focus on authentication would not provide clear benefits for the Bitcoin P2P network but would come with a large amount of additional complexity.
+The primary requirement which existing protocols fail to meet is a sufficiently modular treatment of encryption and authentication. As we argue above, whether and which form of authentication is desired in the Bitcoin P2P network will depend on the specific requirements of the involved peers (resulting in a mix of authenticated and unauthenticated connections), and thus the question of authentication should be decoupled from encryption. However, native support for a handful of standard authentication scenarios (e.g., using digital signatures and certificates) is at the core of the design of existing general-purpose transport encryption protocols. This focus on authentication would not provide clear benefits for the Bitcoin P2P network but would come with a large amount of additional complexity.
-In contrast, our proposal instead aims for simple modular design that makes it possible to address authentication separately. Our proposal provides a foundation for authentication by exporting a ''session ID'' that uniquely identifies the encrypted channel. After an encrypted channel has been established, the two endpoints are able to use any authentication protocol to confirm that they have the same session ID. (This is sometimes called ''channel binding'' because the session ID binds the encrypted channel to the authentication protocol.) Since in our proposal, any authentication needs to run after an encrypted connection has been established, the price we pay for this modularity is a possibly higher number of roundtrips as opposed to other protocols that perform authentication alongside with the Diffie-Hellman key exchange.<ref name="channel_binding_noise_tls">'''Do other protocols not support exporting a session ID?''' While [https://noiseprotocol.org/noise.html#channel-binding Noise] and [https://datatracker.ietf.org/doc/draft-ietf-kitten-tls-channel-bindings-for-tls13/ TLS (as a draft)] offer similar protocol extensions for exporting session IDs, using channel binding for authentication is not at the focus of their design and would not avoid the bulk of additional complexity due to the native support of authentication methods. </ref> However, the resulting increase in connection establishment latency is a not a concern for Bitcoin's long-lived connections, [https://www.dsn.kastel.kit.edu/bitcoin/ which typically live for hours or even weeks].
+In contrast, our proposal instead aims for a simple modular design that makes it possible to address authentication separately. Our proposal provides a foundation for authentication by exporting a ''session ID'' that uniquely identifies the encrypted channel. After an encrypted channel has been established, the two endpoints are able to use any authentication protocol to confirm that they have the same session ID. (This is sometimes called ''channel binding'' because the session ID binds the encrypted channel to the authentication protocol.) Since, in our proposal, any authentication needs to run after an encrypted connection has been established, the price we pay for this modularity is a possibly higher number of roundtrips as opposed to other protocols that perform authentication alongside the Diffie-Hellman key exchange.<ref name="channel_binding_noise_tls">'''Do other protocols not support exporting a session ID?''' While [https://noiseprotocol.org/noise.html#channel-binding Noise] and [https://datatracker.ietf.org/doc/draft-ietf-kitten-tls-channel-bindings-for-tls13/ TLS (as a draft)] offer similar protocol extensions for exporting session IDs, using channel binding for authentication is not the focus of their design and would not avoid the bulk of additional complexity due to the native support of authentication methods. </ref> However, the resulting increase in connection establishment latency is not a concern for Bitcoin's long-lived connections, [https://www.dsn.kastel.kit.edu/bitcoin/ which typically live for hours or even weeks].
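As a purely illustrative sketch of the channel binding idea (authentication itself is out of scope for this proposal; the <code>sign</code>/<code>verify</code> primitives, the tag string, and the helper names below are assumptions, not part of this BIP):

<pre>
# Hypothetical use of the exported session ID for channel binding: whatever
# authentication protocol the peers agree on can commit to the session ID, so
# a successful authentication is tied to this exact encrypted channel.
def make_auth_proof(sign, identity_privkey, session_id):
    # A MitM relaying this proof between two separate channels would fail
    # verification, because each channel has a different session ID.
    return sign(identity_privkey, b"p2p channel binding" + session_id)

def verify_auth_proof(verify, identity_pubkey, session_id, proof):
    return verify(identity_pubkey, b"p2p channel binding" + session_id, proof)
</pre>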
Besides this fundamentally different treatment of authentication, further technical issues arise when applying TLS or Noise to our desired use case:
* Neither offers a pseudorandom bytestream.
* Neither offers native support for elliptic curve cryptography on the curve secp256k1 as otherwise used in Bitcoin. While using secp256k1 is not strictly necessary, it is the obvious choice for any new asymmetric cryptography in Bitcoin because it minimizes the cryptographic hardness assumptions as well as the dependencies that Bitcoin software will need.
* Neither offers shapability of the bytestream.
-* Both provide a stream-based interface to the application layer whereas Bitcoin requires a packet-based interface, resulting in the need for an additional thin layer to perform packet serialization and deserialization.
+* Both provide a stream-based interface to the application layer, whereas Bitcoin requires a packet-based interface, resulting in the need for an additional thin layer to perform packet serialization and deserialization.
While existing protocols could be amended to address all of the aforementioned issues, this would negate the benefits of using them as an off-the-shelf solution, e.g., the possibility of re-using existing implementations and security analyses.
@@ -97,7 +97,7 @@ The specification consists of three parts:
=== Transport layer specification ===
-In this section we define the encryption protocol for messages between peers.
+In this section, we define the encryption protocol for messages between peers.
==== Overview and design ====
@@ -105,7 +105,7 @@ We first give an informal overview of the entire protocol flow and packet encryp
'''Protocol flow overview'''
-Given a newly-established connection (typically TCP/IP) between two v2 P2P nodes, there are 3 phases the connection goes through. The first starts immediately, i.e. there are no v1 messages or any other bytes exchanged on the link beforehand. The two parties are called the '''initiator''' (who established the connection) and the '''responder''' (who accepted the connection).
+Given a newly established connection (typically TCP/IP) between two v2 P2P nodes, there are 3 phases the connection goes through. The first starts immediately, i.e. there are no v1 messages or any other bytes exchanged on the link beforehand. The two parties are called the '''initiator''' (who established the connection) and the '''responder''' (who accepted the connection).
# The '''Key exchange phase''', where nodes exchange data to establish shared secrets.
#* The initiator:
@@ -119,7 +119,7 @@ Given a newly-established connection (typically TCP/IP) between two v2 P2P nodes
#** Receive (the remainder of) the full 64-byte public key from the other side.
#** Use X-only<ref name="xonly_ecdh">'''Why use X-only ECDH?''' Using only the X coordinate provides the same security as using a full encoding of the secret curve point but allows for more efficient implementation by avoiding the need for square roots to compute Y coordinates.</ref> ECDH to compute a shared secret from their private key and the exchanged public keys<ref name="why_ecdh_pubkeys">'''Why is the shared secret computation a function of the exact 64-byte public encodings sent?''' This makes sure that an attacker cannot modify the public key encoding used without modifying the rest of the stream. If a third party wants the ability to modify stream bytes, they need to perform a full MitM attack on the connection.</ref>, and deterministically derive from the secret 4 '''encryption keys''' (two in each direction: one for packet lengths, one for content encryption), a '''session id''', and two 16-byte '''garbage terminators'''<ref>'''What length is sufficient for garbage terminators?''' The length of the garbage terminators determines the probability of accidental termination of a legitimate v2 connection due to garbage bytes (sent prior to ECDH) inadvertently including the terminator. 16 byte terminators with 4095 bytes of garbage yield a negligible probability of such collision which is likely orders of magnitude lower than random connection failure on the Internet.</ref><ref>'''What does a garbage terminator in the wild look like?''' <div>[[File:bip-0324/garbage_terminator.png|none|256px|A garbage terminator model TX-v2 in the wild... sent by the responder]]</div>
</ref> (one in each direction) using HKDF-SHA256.
-#** Send their 16-byte garbage terminator<ref name="why_garbage_term">'''Why does the protocol need a garbage terminator?''' While it is in principle possible to use the garbage authentication packet directly as a terminator (scan until a valid authentication packet follows), this would be significantly slower than just scanning for a fixed byte sequence, as it would require recomputing a Poly1305 tag after every received byte.</ref> followed by a '''garbage authentication packet'''<ref name="why_garbage_auth">'''Why does the protocol require a garbage authentication packet?''' Otherwise the garbage would be modifiable by a third party without consequences. We want to force any active attacker to have to maintain a full protocol state. In addition, such malleability without the consequence of connection termination could enable protocol fingerprinting.</ref>, an '''encrypted packet''' (see further) with arbitrary '''contents''', and '''associated data''' equal to the garbage.
+#** Send their 16-byte garbage terminator<ref name="why_garbage_term">'''Why does the protocol need a garbage terminator?''' While it is in principle possible to use the garbage authentication packet directly as a terminator (scan until a valid authentication packet follows), this would be significantly slower than just scanning for a fixed byte sequence, as it would require recomputing a Poly1305 tag after every received byte.</ref> followed by a '''garbage authentication packet'''<ref name="why_garbage_auth">'''Why does the protocol require a garbage authentication packet?''' Without the garbage authentication packet, the garbage would be modifiable by a third party without consequences. We want to force any active attacker to have to maintain a full protocol state. In addition, such malleability without the consequence of connection termination could enable protocol fingerprinting.</ref>, an '''encrypted packet''' (see further) with arbitrary '''contents''', and '''associated data''' equal to the garbage.
#** Receive up to 4111 bytes, stopping when encountering the garbage terminator.
#** Receive an encrypted packet, verify that it decrypts correctly with associated data set to the garbage received, and then ignore its contents.
#* At this point, both parties have the same keys, and all further communication proceeds in the form of encrypted packets. Packets have an '''ignore bit''', which makes them '''decoy packets''' if set. Decoy packets are to be ignored by the receiver apart from verifying they decrypt correctly. Either peer may send such decoy packets at any point after this. These form the primary shapability mechanism in the protocol. How and when to use them is out of scope for this document.
@@ -134,7 +134,7 @@ Given a newly-established connection (typically TCP/IP) between two v2 P2P nodes
# The '''Application phase''', where the packets exchanged have contents to be interpreted as application data.
#* Whenever either peer has a message to send, it sends a packet with that application message as '''contents'''.
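For illustration, a minimal sketch of the garbage terminator scan from the key exchange phase above (<code>recv_bytes</code> is a stand-in for reading from the connection; real implementations would buffer reads rather than read byte by byte):

<pre>
MAX_GARBAGE_LEN = 4095  # with a 16-byte terminator, at most 4095 + 16 = 4111 bytes are read

def receive_garbage(recv_bytes, garbage_terminator):
    # Read until the peer's 16-byte garbage terminator appears, giving up
    # once 4111 bytes have been received without finding it.
    received = b''
    while len(received) < MAX_GARBAGE_LEN + len(garbage_terminator):
        received += recv_bytes(1)
        if received.endswith(garbage_terminator):
            # Return the garbage itself; it is authenticated afterwards as the
            # associated data of the garbage authentication packet.
            return received[:-len(garbage_terminator)]
    raise ValueError('garbage terminator not received within 4111 bytes')
</pre>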
-In order to provide a means of avoiding the recognizable pattern of first messages being at least 64 bytes, a future backwards-compatible upgrade to this protocol may allow both peers to send their public key + garbage + garbage terminator in multiple rounds, slicing those bytes up into messages arbitrarily, as long as progress is guaranteed.<ref name="handshake_progress">'''How can progress be guaranteed in a backwards-compatible way?''' In order to guarantee progress, it must be ensured that no deadlock occurs, i.e., no state is reached in which each party waits for the other party indefinitely. For example, any upgrade that adheres to the following conditions will guarantee progress:
+To avoid the recognizable pattern of first messages being at least 64 bytes, a future backwards-compatible upgrade to this protocol may allow both peers to send their public key + garbage + garbage terminator in multiple rounds, slicing those bytes up into messages arbitrarily, as long as progress is guaranteed.<ref name="handshake_progress">'''How can progress be guaranteed in a backwards-compatible way?''' In order to guarantee progress, it must be ensured that no deadlock occurs, i.e., no state is reached in which each party waits for the other party indefinitely. For example, any upgrade that adheres to the following conditions will guarantee progress:
* The initiator must start by sending at least as many bytes as necessary to mismatch the 12-byte magic/version prefix.
* The responder must start sending after having received at least one byte that mismatches that 12-byte prefix.
@@ -157,19 +157,19 @@ All data on the wire after the garbage terminators takes the form of encrypted p
Each packet consists of:
* A 3-byte encrypted '''length''' field, encoding the length of the '''contents''' (between ''0'' and ''2<sup>24</sup>-1''<ref name="max_packet_length">'''Is ''2<sup>24</sup>-1'' bytes sufficient as maximum content size?''' The current Bitcoin P2P protocol has no messages which support more than 4000000 bytes of application payload. By supporting up to ''2<sup>24</sup>-1'' we can accommodate future evolutions needing more than 4 times that value. Hypothetical protocol changes that have even more data to exchange than that should probably use multiple separate messages anyway, because of the per-peer receive buffer sizes involved, and the inability to start processing a message before it is fully received. Of course, future versions of the transport protocol could change the size of the length field, if this were really needed.</ref>, inclusive).
* An authenticated encryption of the '''plaintext''', which consists of:
-** A 1-byte '''header''' which consists of transport layer protocol flags. Currently only the highest bit is defined as the '''ignore bit'''. The other bits are ignored, but this may change in future versions<ref>'''Why is the header a part of the plaintext and not included alongside the length field?''' The packet length field is the minimum information that must be available before we can leverage the standard RFC8439 AEAD. Any other data, including metadata like the header being in the content encryption makes it easier to reason about the protocol security w.r.t. data being used before it is authenticated. If the ignore bit was not part of the content, another mechanism would be needed to authenticate it; for example, it could be fed as AAD to the AEAD cipher. We feel the complexity of such an approach outweighs the benefit of saving one byte per message.</ref>.
+** A 1-byte '''header''' which consists of transport layer protocol flags. Currently, only the highest bit is defined as the '''ignore bit'''. The other bits are ignored, but this may change in future versions<ref>'''Why is the header a part of the plaintext and not included alongside the length field?''' The packet length field is the minimum information that must be available before we can leverage the standard RFC8439 AEAD. Keeping any other data, including metadata like the header, in the content encryption makes it easier to reason about the protocol's security w.r.t. data being used before it is authenticated. If the ignore bit were not part of the content, another mechanism would be needed to authenticate it; for example, it could be fed as AAD to the AEAD cipher. We feel the complexity of such an approach outweighs the benefit of saving one byte per message.</ref>.
** The variable-length '''contents'''.
-The encryption of the plaintext uses '''[https://en.wikipedia.org/wiki/ChaCha20-Poly1305 ChaCha20Poly1305]'''<ref name="why_chacha20">'''Why is ChaCha20Poly1305 chosen as basis for packet encryption?''' It is a very widely used authenticated encryption cipher (used amongst others in SSH, TLS 1.2, TLS 1.3, [https://en.wikipedia.org/wiki/QUIC QUIC], Noise, and [https://www.wireguard.com/protocol/ WireGuard]; in the latter it is currently even the only supported cipher), with very good performance in general purpose software implementations. While AES-based ciphers (including the winners in the [https://competitions.cr.yp.to/caesar.html CAESAR] competition in non-lightweight categories) perform significantly better on systems with AES hardware acceleration, they are also significantly slower in pure software implementations. We choose to optimize for the weakest hardware.</ref>, an [https://en.wikipedia.org/wiki/Authenticated_encryption authenticated encryption with associated data] (AEAD) cipher specified in [https://datatracker.ietf.org/doc/html/rfc8439 RFC 8439]. Every packet's plaintext is treated as a separate AEAD message, with a different nonce for each.
+The encryption of the plaintext uses '''[https://en.wikipedia.org/wiki/ChaCha20-Poly1305 ChaCha20Poly1305]'''<ref name="why_chacha20">'''Why is ChaCha20Poly1305 chosen as the basis for packet encryption?''' It is a very widely used authenticated encryption cipher (used among others in SSH, TLS 1.2, TLS 1.3, [https://en.wikipedia.org/wiki/QUIC QUIC], Noise, and [https://www.wireguard.com/protocol/ WireGuard]; in the latter it is currently even the only supported cipher), with very good performance in general purpose software implementations. While AES-based ciphers (including the winners in the [https://competitions.cr.yp.to/caesar.html CAESAR] competition in non-lightweight categories) perform significantly better on systems with AES hardware acceleration, they are also significantly slower in pure software implementations. We choose to optimize for the weakest hardware.</ref>, an [https://en.wikipedia.org/wiki/Authenticated_encryption authenticated encryption with associated data] (AEAD) cipher specified in [https://datatracker.ietf.org/doc/html/rfc8439 RFC 8439]. Every packet's plaintext is treated as a separate AEAD message, with a different nonce for each.
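As a rough sketch of the packet construction described above (the <code>length_cipher</code> and <code>aead</code> objects stand in for the FSChaCha20 and FSChaCha20Poly1305 wrappers specified later in this document; the little-endian encoding of the length field is an assumption of this sketch, not a normative statement):

<pre>
HEADER_IGNORE_BIT = 0x80  # highest bit of the 1-byte header marks decoy packets

def sketch_encrypt_packet(length_cipher, aead, contents, aad=b'', ignore=False):
    header = bytes([HEADER_IGNORE_BIT if ignore else 0])
    # The 3-byte length field is encrypted separately (and unauthenticated),
    # so packet boundaries stay hidden from passive observers.
    enc_length = length_cipher.crypt(len(contents).to_bytes(3, 'little'))
    # Header + contents form a single AEAD message; the result includes the
    # 16-byte Poly1305 authentication tag.
    return enc_length + aead.encrypt(aad, header + contents)
</pre>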
The length must be dealt with specially, as it is needed to determine packet boundaries before the whole packet is received and authenticated. As we want a stream that is pseudorandom to a passive attacker, it still needs encryption. We use unauthenticated<ref name="why_no_len_auth">'''Why is the length encryption not separately authenticated?''' Informally, the relevant security goal we aim for is to hide the number of packets and their lengths (i.e., the packet boundaries) against a passive attacker that receives the bytestream without timing or fragmentation information. (A formal definition can be found, for example, in [https://himsen.github.io/pdf/thesis.pdf Hansen 2016 (Definition 22)] under the name "boundary hiding against chosen-plaintext attacks (BH-CPA)".) However, we do not aim to hide packet boundaries against active attackers because active attackers can always exploit the fact that the Bitcoin P2P protocol is largely query-response based: they can trickle the bytes on the stream one-by-one unmodified and observe when a response comes (see [https://himsen.github.io/pdf/thesis.pdf Hansen 2016 (Section 3.9)] for an in-depth discussion). With that in mind, we accept that an active (non-MitM) attacker is able to figure out some information about packet boundaries by flipping certain bits in the unauthenticated length field, and observing the other side disconnecting immediately or later. Thus, we choose to use unauthenticated encryption for the length data, which is sufficient to achieve boundary hiding against passive attackers, and saves 16 bytes of bandwidth per packet.</ref> '''ChaCha20''' encryption for this, with an independent key. Note that the plaintext length is still implicitly authenticated by the encryption of the plaintext, but this can only be verified after receiving the whole packet. This design is inspired by that of the ChaCha20Poly1305 cipher suite in [http://bxr.su/OpenBSD/usr.bin/ssh/PROTOCOL.chacha20poly1305 OpenSSH].<ref name="openssl_changes">'''How does packet encryption differ from the OpenSSH design?''' The differences are:
* The length field is only 3 bytes instead of 4, as that is sufficient for our purposes.
* Length encryption keeps drawing pseudorandom bytes from the same ChaCha20 cipher for multiple packets, rather than incrementing the nonce for every packet.
* The Poly1305 authentication tag only covers the encrypted plaintext, and not the encrypted length field. This means that plaintext encryption uses the standard ChaCha20Poly1305 construction without any modifications, maximizing applicability of analysis and review of that cipher. The length encryption can be seen as a separate layer, using a separate key, and thus cannot affect any of the confidentiality or integrity guarantees of the plaintext encryption. On the other hand, this change w.r.t. OpenSSH also does not worsen any properties, as incorrect lengths will still trigger authentication failure for the overall packet (the plaintext length is implicitly authenticated by ChaCha20Poly1305).
-* A hash step is performed every 224<ref name="rekey_interval">'''How was the rekeying interval 224 chosen?''' Assuming a node sends only ping messages every 20 minutes (the timeout interval for post-[https://github.com/bitcoin/bips/blob/master/bip-0031.mediawiki BIP31] connections) on a connection, the node will transmit 224 packets in about 3.11 days. This means ''soft rekeying'' after a fixed number of packets automatically translates to an upper-bound of time interval for rekeying, while being much simpler to coordinate than an actual time-based rekeying regime. At the same time, doing it once every 224 messages is sufficiently infrequent that it has only negligible impact on performance. Furthermore, 224 times 3 bytes (the number of bytes consumed by each length encryption) is 672, which is a multiple of 64 minus 32. This means that at the end of 224 length encryptions, exactly 32 bytes of keystream data remain that can be used as next key.</ref> messages to rekey the the encryption ciphers, in order to provide forward security.
+* A hash step is performed every 224<ref name="rekey_interval">'''How was the rekeying interval 224 chosen?''' Assuming a node sends only ping messages every 20 minutes (the timeout interval for post-[https://github.com/bitcoin/bips/blob/master/bip-0031.mediawiki BIP31] connections) on a connection, the node will transmit 224 packets in about 3.11 days. This means ''soft rekeying'' after a fixed number of packets automatically translates to an upper-bound of time interval for rekeying, while being much simpler to coordinate than an actual time-based rekeying regime. At the same time, doing it once every 224 messages is sufficiently infrequent that it has only negligible impact on performance. Furthermore, 224 times 3 bytes (the number of bytes consumed by each length encryption) is 672, which is a multiple of 64 minus 32. This means that at the end of 224 length encryptions, exactly 32 bytes of keystream data remain that can be used as next key.</ref> messages to rekey the encryption ciphers, in order to provide forward security.
</ref> Because only fixed-length chunks (3-byte length fields) are encrypted, we do not need to treat all length chunks as separate messages. Instead, a single cipher (with the same nonce) is used for multiple consecutive length fields. This avoids wasting 61 pseudorandom bytes per packet, and makes the cost of having a separate cipher for length encryption negligible.<ref name="ok_to_batch">'''Is it acceptable to use a less standard construction for length encryption?''' The fact that multiple (non-overlapping) bytes generated by a single ChaCha20 cipher are used for the encryption of multiple consecutive length fields is uncommon. We feel the performance cost gained by this deviation is worth it (especially for small packets, which are very common in Bitcoin's P2P protocol), given the low guarantees that are feasible for length encryption in the first place, and the result is still sufficient to provide pseudorandomness from the view of passive attackers. For plaintext encryption, we independently use a very standard construction, as the stakes for confidentiality and integrity there are much higher.</ref>
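The arithmetic behind these choices can be checked directly:

<pre>
REKEY_INTERVAL = 224
CHACHA20_BLOCK = 64   # bytes of keystream per ChaCha20 block
LENGTH_FIELD = 3      # bytes of keystream consumed per packet length

# 224 length encryptions consume 672 keystream bytes, a multiple of 64 minus
# 32, leaving exactly 32 bytes of the final block to serve as the next key.
consumed = REKEY_INTERVAL * LENGTH_FIELD
assert consumed == 672 and consumed % CHACHA20_BLOCK == CHACHA20_BLOCK - 32

# Treating every 3-byte length field as a separate ChaCha20 message would
# instead waste 64 - 3 = 61 keystream bytes per packet.
assert CHACHA20_BLOCK - LENGTH_FIELD == 61
</pre>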
-In order to provide forward security<ref name="rekey">'''What value does forward security provide?''' Re-keying ensures [https://eprint.iacr.org/2001/035.pdf forward secrecy within a session], i.e., an attacker compromising the current session secrets cannot derive past encryption keys in the same session.</ref><ref>'''Why have a cipher with forward secrecy but no periodical refresh of the ECDH key exchange?''' Our cipher ratchets encryption keys forward in order to protect messages encrypted under ''past'' encryption keys. In contrast, re-performing ECDH key exchange would protect messages encrypted under ''future'' encryption keys, i.e., it would re-establish security after the attacker had compromised one of the peers ''temporarily'' (e.g., the attacker obtains a memory dump). We do not believe protecting against that is a priority: an attacker that, for whatever reason, is capable of an attack that reveals encryption keys (or other session secrets) of a peer once is likely capable of performing the same attack again after peers have re-performed the ECDH key exchange. Thus, we do not believe the benefits of re-performing key exchange outweigh the additional complexity that comes with the necessary coordination between the peers. We note that the initiator could choose to close and re-open the entire connection in order to force a refresh of the ECDH key exchange, but that introduces other issues: a connection slot needs to be kept open at the responder side, it is not cryptographically guaranteed that really the same initiator will use it, and the observable TCP reset and handshake may create a detectable pattern.</ref>, the encryption keys for both plaintext and length encryption are cycled every 224 messages, by switching to a new key that is generated by the key stream using the old key.
+In order to provide forward security<ref name="rekey">'''What value does forward security provide?''' Re-keying ensures [https://eprint.iacr.org/2001/035.pdf forward secrecy within a session], i.e., an attacker compromising the current session secrets cannot derive past encryption keys in the same session.</ref><ref>'''Why have a cipher with forward secrecy but no periodical refresh of the ECDH key exchange?''' Our cipher ratchets encryption keys forward in order to protect messages encrypted under ''past'' encryption keys. In contrast, re-performing ECDH key exchange would protect messages encrypted under ''future'' encryption keys, i.e., it would re-establish security after the attacker had compromised one of the peers ''temporarily'' (e.g., the attacker obtains a memory dump). We do not believe protecting against that is a priority: an attacker that, for whatever reason, is capable of an attack that reveals encryption keys (or other session secrets) of a peer once is likely capable of performing the same attack again after peers have re-performed the ECDH key exchange. Thus, we do not believe the benefits of re-performing key exchange outweigh the additional complexity that comes with the necessary coordination between the peers. We note that the initiator could choose to close and re-open the entire connection to force a refresh of the ECDH key exchange, but that introduces other issues: a connection slot needs to be kept open at the responder side, it is not cryptographically guaranteed that really the same initiator will use it, and the observable TCP reset and handshake may create a detectable pattern.</ref>, the encryption keys for both plaintext and length encryption are cycled every 224 messages, by switching to a new key that is generated by the key stream using the old key.
==== Handshake: key exchange and version negotiation ====
@@ -242,19 +242,19 @@ To define the needed functions, we first introduce a helper function, matching t
** For every ''x'' in ''{u + 4Y<sup>2</sup>, (-X/Y - u)/2, (X/Y - u)/2}'' (all ''mod p''; the order matters):
*** If ''lift_x(x)'' succeeds, return ''x''. There is at least one such ''x''.
-To find encodings of a given X coordinate ''x'', we first need the inverse of ''XSwiftEC''. The function ''XSwiftECInv(x, u, case)'' either returns ''t'' such that ''XSwiftEC(u, t) = x'', or ''None''. The ''case'' variable is an integer in range 0 to 7 inclusive, which selects which of the up to 8 valid such ''t'' values to return:
+To find encodings of a given X coordinate ''x'', we first need the inverse of ''XSwiftEC''. The function ''XSwiftECInv(x, u, case)'' either returns ''t'' such that ''XSwiftEC(u, t) = x'', or ''None''. The ''case'' variable is an integer in range ''0..7'', which selects which of the up to 8 valid such ''t'' values to return:
* ''XSwiftECInv(x, u, case)'':
** If ''case & 2 = 0'':
*** If ''lift_x(-x - u)'' succeeds, return ''None''.
*** Let ''v = x''.
*** Let ''s = -(u<sup>3</sup> + 7)/(u<sup>2</sup> + uv + v<sup>2</sup>) (mod p)''.
-** If ''case & 2 = 2'':
+** Else (''case & 2 = 2''):
*** Let ''s = x - u (mod p)''.
*** If ''s = 0'', return ''None''.
*** Let ''r'' be the square root of ''-s(4(u<sup>3</sup> + 7) + 3u<sup>2</sup>s) (mod p).''<ref name="modsqrt">'''How to compute a square root mod ''p''?''' Due to the structure of ''p'', a candidate for the square root of ''a'' mod ''p'' can be computed as ''x = a<sup>(p+1)/4</sup> mod p''. If ''a'' is not a square mod ''p'', this formula returns the square root of ''-a mod p'' instead, so it is necessary to verify that ''x<sup>2</sup> mod p = a''. If that is the case ''-x mod p'' is a solution too, but we define "the" square root to be equal to that expression (the square root will therefore always be a square itself, as ''(p+1)/4'' is even). This algorithm is a specialization of the [https://en.wikipedia.org/wiki/Tonelli%E2%80%93Shanks_algorithm Tonelli-Shanks algorithm].</ref> Return ''None'' if it does not exist.
-** If ''case & 1 = 1'' and ''r = 0'', return ''None''.
-*** Let ''v = (-u + r/s)/2''.
+*** If ''case & 1 = 1'' and ''r = 0'', return ''None''.
+*** Let ''v = (r/s - u)/2''.
** Let ''w'' be the square root of ''s (mod p)''. Return ''None'' if it does not exist.
** If ''case & 5 = 0'', return ''-w(u(1 - c)/2 + v)''.
** If ''case & 5 = 1'', return ''w(u(1 + c)/2 + v)''.
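A minimal sketch of the square root computation referenced in the footnote above, assuming the secp256k1 field prime ''p'':

<pre>
p = 2**256 - 2**32 - 977  # the secp256k1 field prime; p % 4 == 3

def modsqrt(a):
    # Candidate root a^((p+1)/4) mod p; if a is not a square mod p, this
    # formula yields a root of -a instead, so the result must be verified.
    a %= p
    x = pow(a, (p + 1) // 4, p)
    return x if (x * x) % p == a else None
</pre>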
@@ -357,7 +357,7 @@ def respond_v2_handshake(peer, garbage_len):
use_v1_protocol()
</pre>
-Upon receiving the encoded responder public key, the initiator derives the shared ECDH secret and instantiates the encrypted transport. It then sends the derived 16-byte <code>initiator_garbage_terminator</code> followed by an authenticated, encrypted packet with empty contents<ref name="send_empty_garbauth">'''Does the content of the garbage authentication packet need to be empty?''' The receiver ignores the content of the garbage authentication packet, so its content can be anything, and it can in principle be used as a shaping mechanism too. There is however no need for that, as immediately afterwards the initiator can start using decoy packets as (much more flexible) shaping mechanism instead.</ref> to authenticate the garbage, and its own version packet. It then receives the responder's garbage and garbage authentication packet (delimited by the garbage terminator), and checks if the garbage is authenticated correctly. The responder performs very similar steps, but includes the earlier received prefix bytes in the public key. As mentioned before, the encrypted packets for the '''version negotiation phase''' can be piggybacked with the garbage authentication packet to minimize roundtrips.
+Upon receiving the encoded responder public key, the initiator derives the shared ECDH secret and instantiates the encrypted transport. It then sends the derived 16-byte <code>initiator_garbage_terminator</code> followed by an authenticated, encrypted packet with empty contents<ref name="send_empty_garbauth">'''Does the content of the garbage authentication packet need to be empty?''' The receiver ignores the content of the garbage authentication packet, so its content can be anything, and it can in principle be used as a shaping mechanism too. There is however no need for that, as immediately afterward the initiator can start using decoy packets as (a much more flexible) shaping mechanism instead.</ref> to authenticate the garbage, and its own version packet. It then receives the responder's garbage and garbage authentication packet (delimited by the garbage terminator), and checks if the garbage is authenticated correctly. The responder performs very similar steps but includes the earlier received prefix bytes in the public key. As mentioned before, the encrypted packets for the '''version negotiation phase''' can be piggybacked with the garbage authentication packet to minimize roundtrips.
<pre>
def complete_handshake(peer, initiating):
@@ -405,7 +405,7 @@ These will be used for plaintext encryption and length encryption, respectively.
To provide re-keying every 224 packets, we specify two wrappers.
-The first is '''FSChaCha20Poly1305''', which represents a ChaCha20Poly1305 AEAD, which automatically changes the nonce after every message, and rekeys every 224 messages by encrypting 32 zero bytes<ref name="rekey_why_aead">'''Why is rekeying implemented in terms of an invocation of the AEAD?''' This means the FSChaCha20Poly1305 wrapper can be thought of as a pure layer around the ChaCha20Poly1305 AEAD. Actual implementations can take advantage of the fact that this formulation is equivalent to using byte 64 through 95 of the keystream output of the underlying ChaCha20 cipher as new key, avoiding the need for Poly1305 in the process.</ref>, and using the first 32 bytes of the result. Each message will be used for one packet. Note that in our protocol, any FSChaCha20Poly1305 instance is always either exclusively encryption or exclusively decryption, as separate instances are used for each direction of the protocol. The nonce used for a message is composed of the 32-bit little endian encoding of the number of messages with the current key, followed by the 64-bit little endian encoding of the number of rekeyings performed. For rekeying, the first 32-bit integer is set to ''0xffffffff''.
+The first is '''FSChaCha20Poly1305''', which represents a ChaCha20Poly1305 AEAD that automatically changes the nonce after every message and rekeys every 224 messages by encrypting 32 zero bytes<ref name="rekey_why_aead">'''Why is rekeying implemented in terms of an invocation of the AEAD?''' This means the FSChaCha20Poly1305 wrapper can be thought of as a pure layer around the ChaCha20Poly1305 AEAD. Actual implementations can take advantage of the fact that this formulation is equivalent to using bytes 64 through 95 of the keystream output of the underlying ChaCha20 cipher as the new key, avoiding the need for Poly1305 in the process.</ref> and using the first 32 bytes of the result as the new key. Each message will be used for one packet. Note that in our protocol, any FSChaCha20Poly1305 instance is only ever used for either encryption or decryption, as separate instances are used for each direction of the protocol. The nonce used for a message is composed of the 32-bit little-endian encoding of the number of messages with the current key, followed by the 64-bit little-endian encoding of the number of rekeyings performed. For rekeying, the first 32-bit integer is set to ''0xffffffff''.
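For illustration only, the nonce layout described in this paragraph could be sketched as follows (the helper name is not part of the BIP; the authoritative construction is the FSChaCha20Poly1305 class below):

<pre>
def fschacha20poly1305_nonce(msg_counter, rekey_counter):
    # 32-bit LE count of messages under the current key, followed by a 64-bit
    # LE count of rekeyings performed: 12 bytes total, as RFC 8439 requires.
    return msg_counter.to_bytes(4, 'little') + rekey_counter.to_bytes(8, 'little')

# The rekeying operation itself uses 0xffffffff as the message counter.
assert fschacha20poly1305_nonce(0xffffffff, 5) == b'\xff\xff\xff\xff' + (5).to_bytes(8, 'little')
</pre>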
<pre>
REKEY_INTERVAL = 224
@@ -437,7 +437,7 @@ class FSChaCha20Poly1305:
return self.crypt(aad, plaintext, False)
</pre>
-The second is '''FSChaCha20''', a (single) stream cipher which is used for the lengths of all packets. Encryption and decryption are identical here, so a single function <code>crypt</code> is exposed. It XORs the input with bytes generated using the ChaCha20 block function, rekeying every 224 chunks using the next 32 bytes of the block function output as new key. A ''chunk'' refers here to a single invocation of <code>crypt</code>. As explained before, the same cipher is used for 224 consecutive chunks, to avoid wasting cipher output. The nonce used for these batches of 224 chunks is composed of 4 zero bytes followed by the 64-bit little endian encoding of the number of rekeyings performed. The block counter is reset to 0 after every rekeying.
+The second is '''FSChaCha20''', a (single) stream cipher which is used for the lengths of all packets. Encryption and decryption are identical here, so a single function <code>crypt</code> is exposed. It XORs the input with bytes generated using the ChaCha20 block function, rekeying every 224 chunks using the next 32 bytes of the block function output as the new key. A ''chunk'' here refers to a single invocation of <code>crypt</code>. As explained before, the same cipher is used for 224 consecutive chunks, to avoid wasting cipher output. The nonce used for these batches of 224 chunks is composed of 4 zero bytes followed by the 64-bit little-endian encoding of the number of rekeyings performed. The block counter is reset to 0 after every rekeying.
<pre>
class FSChaCha20:
@@ -515,12 +515,12 @@ v2 Bitcoin P2P transport layer packets use the encrypted message structure shown
{|class="wikitable"
! Field !! Size in bytes !! Comments
|-
-| <code>message_type</code> || ''1..13'' || either a one byte ID or an ASCII string prefixed with a length byte
+| <code>message_type</code> || 1 or 13 || either a one-byte ID in the range ''1..255'' or <code>b'\x00'</code> followed by a 12-byte ASCII message type (as in the v1 P2P protocol)
|-
| <code>message_payload</code> || <code>message_length</code> || message payload
|}
-If the first byte of <code>message_type</code> is in the range ''1..12'', it is interpreted as the number of ASCII bytes that follow for the message type. If it is in the range ''13..255'', it is interpreted as a message type ID. This structure results in smaller messages than the v1 protocol as most messages sent/received will have a message type ID.<ref name="smaller_messages">'''How do the length between v1 and v2 compare?''' For messages that use the 1-byte short message type ID, v2 packets use 3 bytes less per message than v1.</ref>
+If the first byte of <code>message_type</code> is <code>b'\x00'</code>, the following 12 bytes are interpreted as an ASCII message type (as in the v1 P2P protocol), padded at the end with <code>b'\x00'</code> as necessary. If the first byte of <code>message_type</code> is in the range ''1..255'', it is interpreted as a message type ID. This structure results in smaller messages than the v1 protocol, as most messages sent/received will have a message type ID. We recommend reserving 1-byte type IDs for message types that are sent more than once per direction per connection.<ref name="smaller_messages">'''How do the lengths between v1 and v2 compare?''' For messages that use the 1-byte short message type ID, v2 packets use 3 bytes less per message than v1.</ref><ref name="fixed_length_long_ids">'''Why not allow variable-length long message type IDs?''' Allowing variable-length long IDs reduces the available 1-byte ID space by 12 (to encode the length itself) and incentivizes less descriptive message types. In addition, limiting message types to fixed lengths of 1 or 13 hampers traffic analysis.</ref>
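A minimal sketch of this encoding (illustrative only; the helper names are not part of the BIP):

<pre>
def encode_message_type(short_id=None, ascii_type=None):
    if short_id is not None:
        assert 1 <= short_id <= 255      # short form: 1-byte message type ID
        return bytes([short_id])
    name = ascii_type.encode('ascii')    # long form: 0x00 + 12-byte ASCII name,
    assert 1 <= len(name) <= 12          # padded at the end with zero bytes
    return b'\x00' + name + b'\x00' * (12 - len(name))

def decode_message_type(payload):
    # Returns (message type, remaining message payload).
    if payload[0] != 0:
        return payload[0], payload[1:]
    return payload[1:13].rstrip(b'\x00').decode('ascii'), payload[13:]
</pre>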
The following table lists currently defined message type IDs:
@@ -533,50 +533,38 @@ The following table lists currently defined message type IDs:
!3
|-
!+0
-|(undefined)||(1 byte string)||(2 byte string)||(3 byte string)
+|(12 bytes follow)||<code>ADDR</code>||<code>BLOCK</code>||<code>BLOCKTXN</code>
|-
!+4
-|(4 byte string)||(5 byte string)||(6 byte string)||(7 byte string)
+|<code>CMPCTBLOCK</code>||<code>FEEFILTER</code>||<code>FILTERADD</code>||<code>FILTERCLEAR</code>
|-
!+8
-|(8 byte string)||(9 byte string)||(10 byte string)||(11 byte string)
+|<code>FILTERLOAD</code>||<code>GETBLOCKS</code>||<code>GETBLOCKTXN</code>||<code>GETDATA</code>
|-
!+12
-|(12 byte string)||<code>ADDR</code>||<code>BLOCK</code>||<code>BLOCKTXN</code>
+|<code>GETHEADERS</code>||<code>HEADERS</code>||<code>INV</code>||<code>MEMPOOL</code>
|-
!+16
-|<code>CMPCTBLOCK</code>||<code>FEEFILTER</code>||<code>FILTERADD</code>||<code>FILTERCLEAR</code>
+|<code>MERKLEBLOCK</code>||<code>NOTFOUND</code>||<code>PING</code>||<code>PONG</code>
|-
!+20
-|<code>FILTERLOAD</code>||<code>GETADDR</code>||<code>GETBLOCKS</code>||<code>GETBLOCKTXN</code>
+|<code>SENDCMPCT</code>||<code>TX</code>||<code>GETCFILTERS</code>||<code>CFILTER</code>
|-
!+24
-|<code>GETDATA</code>||<code>GETHEADERS</code>||<code>HEADERS</code>||<code>INV</code>
+|<code>GETCFHEADERS</code>||<code>CFHEADERS</code>||<code>GETCFCHECKPT</code>||<code>CFCHECKPT</code>
|-
!+28
-|<code>MEMPOOL</code>||<code>MERKLEBLOCK</code>||<code>NOTFOUND</code>||<code>PING</code>
+|<code>ADDRV2</code>||<code>REQRECON</code>||<code>SKETCH</code>||<code>REQSKETCHEXT</code>
|-
!+32
-|<code>PONG</code>||<code>SENDCMPCT</code>||<code>SENDHEADERS</code>||<code>TX</code>
-|-
-!+36
-|<code>VERACK</code>||<code>VERSION</code>||<code>GETCFILTERS</code>||<code>CFILTER</code>
-|-
-!+40
-|<code>GETCFHEADERS</code>||<code>CFHEADERS</code>||<code>GETCFCHECKPT</code>||<code>CFCHECKPT</code>
-|-
-!+44
-|<code>WTXIDRELAY</code>||<code>ADDRV2</code>||<code>SENDADDRV2</code>||<code>SENDTXRCNCL</code>
-|-
-!+48
-|<code>REQRECON</code>||<code>SKETCH</code>||<code>REQSKETCHEXT</code>||<code>RECONCILDIFF</code>
+|<code>RECONCILDIFF</code>
|-
-!&geq;52
+!&geq;33
|| colspan="4" | (undefined)
|}
-The message types may be updated separately after BIP finalization.
+Additional message types may be added separately after BIP finalization.
=== Signaling specification ===
==== Signaling v2 support ====
@@ -586,8 +574,8 @@ Peers supporting the v2 transport protocol signal support by advertising the <co
== Test Vectors ==
For development and testing purposes, we provide a collection of test vectors in CSV format, and a naive, highly inefficient, [[bip-0324/reference.py|reference implementation]] of the relevant algorithms. This code is for demonstration purposes only:
-* [[bip-0324/ellswift_decode_test_vectors.csv|XElligatorSwift decoding vectors]] give examples of ElligatorSwift-encoded public keys, and the X coordinate they map to.
-* [[bip-0324/xswiftec_inv_test_vectors.csv|XSwiftECInv vectors]] give examples of ''(u, x)'' pairs, and the various ''t'' values that ''xswiftec_inv'' maps them to.
+* [[bip-0324/ellswift_decode_test_vectors.csv|XElligatorSwift decoding vectors]] provide examples of ElligatorSwift-encoded public keys, and the X coordinate they map to.
+* [[bip-0324/xswiftec_inv_test_vectors.csv|XSwiftECInv vectors]] provide examples of ''(u, x)'' pairs, and the various ''t'' values that ''xswiftec_inv'' maps them to.
* [[bip-0324/packet_encoding_test_vectors.csv|Packet encoding vectors]] illustrate the lifecycle of the authenticated encryption scheme proposed in this document.
== Rationale and References ==
@@ -599,3 +587,4 @@ Thanks to everyone (last name order) that helped invent and develop the ideas in
* Matt Corallo
* Lloyd Fournier
* Gregory Maxwell
+* Anthony Towns