Bitcoin Cash Difficulty Algorithm Debate Heats Up With ...

BlockTorrent: The famous algorithm which BitTorrent uses for SHARING BIG FILES. Which you probably thought Bitcoin *also* uses for SHARING NEW BLOCKS (which are also getting kinda BIG). But Bitcoin *doesn't* torrent *new* blocks (while relaying). It only torrents *old* blocks (while sync-ing). Why?

This post is being provided to further disseminate an existing proposal, originally presented by jtoomim back in September of 2015 on the bitcoin-dev mailing list (full text at the end of this OP), and on reddit:
https://np.reddit.com/btc/comments/3zo72i/fyi_ujtoomim_is_working_on_a_scaling_proposal/cyomgj3
Here's a TL;DR, in his words:
BlockTorrenting
For initial block sync, [Bitcoin] sort of works [like BitTorrent] already.
You download a different block from each peer. That's fine.
However, a mechanism does not currently exist for downloading a portion of each [new] block from a different peer.
That's what I want to add.
~ jtoomim
The more detailed version of this "BlockTorrenting" proposal (as presented by jtoomim on the bitcoin-dev mailing list) is linked and copied / reformatted at the end of this OP.
Meanwhile here are some observations from me as a concerned member of the Bitcoin-using public.
Questions:
Whoa??
WTF???
Bitcoin doesn't do this kind of "blocktorrenting" already??
But.. But... I thought Bitcoin was "p2p" and "based on BitTorrent"...
... because (as we all know) Bitcoin has to download giant files.
Oh...
Bitcoin only "torrents" when sharing one certain kind of really big file: the existing blockchain, when a node is being initialized.
But Bitcoin doesn't "torrent" when sharing another certain kind of moderately big file (a file whose size, by the way, has been notoriously and steadily growing over the years to the point where the system running the legacy "Core"/Blockstream Bitcoin implementation is starting to become dangerously congested - no matter what some delusional clowns among the "Core" devs may say): ie, the world's most wildly popular, industrial-strength "p2p file sharing algorithm" is mysteriously not being used where the Bitcoin network needs it the most in order to get transactions confirmed on-chain: when a newly found block needs to be shared among nodes, ie when a node is relaying new blocks.
https://np.reddit.com/Bitcoin+bitcoinxt+bitcoin_uncensored+btc+bitcoin_classic/search?q=blocktorrent&restrict_sr=on
How many of you (honestly) just simply assumed that this algorithm was already being used in Bitcoin - since we've all been told that "Bitcoin is p2p, like BitTorrent"?
As it turns out - the only part of Bitcoin which has been p2p up until now is the "sync-ing a new full-node" part.
The "running an existing full-node" part of Bitcoin has never been implemented as truly "p2p" yet!!!1!!!
And this is precisely the part of the system that we've been wasting all of our time (and destroying the community) fighting over for the past few months - because the so-called "experts" from the legacy "Core"/Blockstream Bitcoin implementation ignored this proposal!
Why?
Why have all the so-called "experts" at "Core"/Blockstream ignored this obvious well-known effective & popular & tested & successful algorithm for doing "blocktorrenting" to torrent each new block being relayed?
Why have the "Core"/Blockstream devs failed to p2p-ize the most central, fundamental networking aspect of Bitcoin - the part where blocks get propagated, the part we've been fighting about for the past few years?
This algorithm for "torrenting" a big file in parallel from peers is the very definition of "p2p".
It "surgically" attacks the whole problem of sharing big files in the most elegant and efficient way possible: right at the lowest level of the bottleneck itself, cleverly chunking a file and uploading it in parallel to multiple peers.
Everyone knows torrenting works. Why isn't Bitcoin using it for its new blocks?
As millions of torrenters already know (but evidently all the so-called "experts" at Core/Blockstream seem to have conveniently forgotten), "torrenting" a file (breaking a file into chunks and then offering a different chunk to each peer to "get it out to everyone fast" - before your particular node even has the entire file) is such a well-known / feasible / obvious / accepted / battle-tested / highly efficient algorithm for "parallelizing" (and thereby significantly accelerating) the sharing of big files among peers, that many people simply assumed that Bitcoin had already been doing this kind of "torrenting of new-blocks" these whole past 7 years.
But Bitcoin doesn't do this - yet!
None of the Core/Blockstream devs (and the Chinese miners who follow them) have prioritized p2p-izing the most central and most vital and most resource-consuming function of the Bitcoin network - the propagation of new blocks!
Maybe it took someone who's both a miner and a dev to "scratch" this particular "itch": Jonathan Toomim jtoomim.
  • A miner + dev who gets very little attention / respect from the Core/Blockstream devs (and from the Chinese miners who follow them) - perhaps because they feel threatened by a competing implementation?
  • A miner + dev who may have come up with the simplest and safest and most effective algorithmic (ie, software-based, not hardware-consuming) scaling proposal of anyone!
  • A dev who is not paid by Blockstream, and who is therefore free from the secret, undisclosed corporate restraints / confidentiality agreements imposed by the shadowy fiat venture-capitalists and legacy power elite who appear to be attempting to cripple our code and muzzle our devs.
  • A miner who has the dignity not to let himself be forced into signing a loyalty oath to any corporate overlords after being locked in a room until 3 AM.
Precisely because jtoomim is both an independent miner and an independent dev...
  • He knows what needs to be done.
  • He knows how to do it.
  • He is free to go ahead and do it - in a permissionless, decentralized fashion.
Possible bonus: The "blocktorrent" algorithm would help the most in the upload direction - which is precisely where Bitcoin scaling needs the most help!
Consider the "upload" direction for a relatively slow full-node - such as Luke-Jr, who reports that his internet is so slow, he has not been able to run a full-node since mid-2015.
The upload direction is the direction which everyone says has been the biggest problem with Bitcoin - because, in order for a full-node to be "useful" to the network:
  • it has to be able to upload a new block to (at least) 8 peers,
  • which places (at least) 8x more "demand" on the full-node's upload bandwidth.
The brilliant, simple proposed "blocktorrent" algorithm from jtoomim (already proven to work with Bram Cohen's BitTorrent protocol, and also already proven to work for initial sync-ing of Bitcoin full-nodes - but still un-implemented for ongoing relaying among full-nodes) looks like it would provide a significant performance improvement precisely at this tightest "bottleneck" in the system, the crucial central nexus where most of the "traffic" (and the congestion) is happening: the relaying of new blocks from "slower" full-nodes.
The detailed explanation for how this helps "slower" nodes when uploading, is as follows.
Say you are a "slower" node.
You need to send a new block out to (at least) 8 peers - but your "upload" bandwidth is really slow.
If you were to split the file into (at least) 8 "chunks", and then upload a different one of these (at least) 8 "chunks" to each of your (at least) 8 peers - then (if you were using "blocktorrenting") it would only take you 1/8 (or less) of the "normal" time to do this (compared to the naïve legacy "Core" algorithm).
Now the new block which your "slower" node was attempting to upload is already "out there" - in 1/8 (or less) of the "normal" time compared to the naïve legacy "Core" algorithm.[ 1 ]
... [ 1 ] There will of course also be a tiny amount of extra overhead involved due to the "housekeeping" performed by the "blocktorrent" algorithm itself - involving some additional processing and communicating to decompose the block into chunks, to organize the relaying of different chunks to different peers, and to recompose the chunks into a block again (all of which, depending on the size of the block and the latency of your node's connections to its peers, would in most cases be negligible compared to the much-greater speed-up provided by the "blocktorrent" algorithm itself).
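As a rough illustration of the arithmetic above, here is a back-of-envelope sketch; the block size, bandwidth, and peer count are assumptions made for the sake of the calculation, not measurements:

```python
# Hypothetical comparison: time for a "slower" node to get one new block
# fully "out there", naive relay vs. chunked ("blocktorrent") relay.
# All numbers below are illustrative assumptions.

BLOCK_MB = 8          # assumed block size
UPLOAD_MBPS = 2       # assumed upload bandwidth of the slow node
PEERS = 8             # minimum "useful" peer count

block_bits = BLOCK_MB * 8 * 1e6
upload_bps = UPLOAD_MBPS * 1e6

# Naive: the node uploads the *entire* block to each of its 8 peers.
naive_seconds = PEERS * block_bits / upload_bps

# Chunked: the node uploads a *different* 1/8 chunk to each peer; the peers
# then trade chunks among themselves, using their bandwidth instead of ours.
chunked_seconds = block_bits / upload_bps  # one full block's worth of upload

print(f"naive:   {naive_seconds:.0f} s")     # 256 s
print(f"chunked: {chunked_seconds:.0f} s")   # 32 s, i.e. 1/8 of the time
```

The chunked node still uploads one full block's worth of data in total, but only once rather than eight times over.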
Now that your block is "out there" at those 8 (or more) peer nodes to whom you just blocktorrented it in 1/8 (or less) of the time - it has now been liberated from the "bottleneck" of your "slower" node.
In fact, its further propagation across the net may now be able to leverage much faster upload speeds from some other node(s) which have "blocktorrent"-downloaded it in pieces from you (and other peers) - and which might be faster relaying it along, than your "slower" node.
For some mysterious reason, the legacy Bitcoin implementation from "Core"/Blockstream has not been doing this kind of "blocktorrenting" for new blocks.
It's only been doing this torrenting for old blocks. The blocks that have already been confirmed.
Which is fine.
But we also obviously need this sort of "torrenting" to be done for each new block as it is being confirmed.
And this is where the entire friggin' "scaling bottleneck" is occurring, which we just wasted the past few years "debating" about.
Just sit down and think about this for a minute.
We've had all these so-called "experts" (Core/Blockstream devs and other small-block proponents) telling us for years that guys like Hearn and Gavin and repos like Classic and XT and BU were "wrong" or at least "unserious" because they "merely" proposed "brute-force" scaling: ie, scaling which would simply place more demands on finite resources (specifically: on the upload bandwidth from full-nodes - who need to relay to at least 8 peer full-nodes in order to be considered "useful" to the network).
These "experts" have been beating us over the head this whole time, telling us that we have to figure out some (really complicated, unproven, inefficient and centralized) clever scaling algorithms to squeeze more efficiency out of existing infrastructure.
And here is the most well-known / feasible / obvious / accepted / battle-tested algorithm for "parallelizing" (and thereby massively accelerating) the sharing of big files among peers - the BitTorrent algorithm itself, the gold standard of p2p relaying par excellence, which has been a major success on the Internet for over a decade, at one point accounting for nearly 1/3 of all traffic on the Internet itself - and which is also already being used in one part of Bitcoin: during the phase of sync-ing a new node.
And apparently pretty much only jtoomim has been talking about using it for the actual relaying of new blocks - while Core/Blockstream devs have so far basically ignored this simple and safe and efficient proposal.
And then the small-block sycophants (reddit users or wannabe C/C++ programmers who have been beaten into submission by the FUD and "technological pessimism" of the Core/Blockstream devs, and by the censorship on their legacy forum) all "laugh" at Classic and proclaim "Bitcoin doesn't need another dev team - all the 'experts' are at Core / Blockstream"...
...when in fact it actually looks like jtoomim (an independent miner + dev, free from the propaganda and secret details of the corporate agenda of Core/Blockstream - who works on the Classic Bitcoin implementation) may have proposed the simplest and safest and most effective scaling algorithm in this whole debate.
By the way, his proposal estimates that we could get about one order of magnitude greater throughput, based on the typical latency and blocksize for blocks of around 8 MB and bandwidth of around 20 Mbps (which seems like a pretty normal scenario).
So why the fuck isn't this being done yet?
This is such a well-known / feasible / obvious / accepted / battle-tested algorithm for "parallelizing" (and thereby significantly accelerating) the sharing of big files among peers:
  • It's already being used for the (currently) 65 gigabytes of "blocks in the existing blockchain" itself - the phase where a new node has to sync with the blockchain.
  • It's already being used in BitTorrent - although the BitTorrent protocol has been optimized more to maximize throughput, whereas it would probably be a good idea to optimize the BlockTorrent protocol to minimize latency (since avoiding orphans is the big issue here) - which I'm fairly sure should be quite doable.
This algorithm is so trivial / obvious / straightforward / feasible / well-known / proven that I (and probably many others) simply assumed that Bitcoin had been doing this all along!
But it has never been implemented.
There is however finally a post about it today on the score-hidden forum /Bitcoin, from eragmus:
[bitcoin-dev] BlockTorrent: Torrent-style new-block propagation on Merkle trees
https://np.reddit.com/Bitcoin/comments/484nbx/bitcoindev_blocktorrent_torrentstyle_newblock/
And, predictably, the top-voted comment there is a comment telling us why it will never work.
And the comment after that comment is from the author of the proposal, jtoomim, explaining why it would work.
Score hidden on all those comments.
Because the immature tyrant theymos still doesn't understand the inherent advantages of people using reddit's upvoting & downvoting tools to hold decentralized, permissionless debates online.
Whatever.
Questions:
(1) Would this "BlockTorrenting" algorithm from jtoomim really work?
(2) If so, why hasn't it been implemented yet?
(3) Specifically: With all the "dev firepower" (and $76 million in venture capital) available at Core/Blockstream, why have they not prioritized implementing this simple and safe and highly effective solution?
(4) Even more specifically: Are there undisclosed strategies / agreements / restraints imposed by Blockstream financial investors on Bitcoin "Core" devs which have been preventing further discussion and eventual implementation of this possible simple & safe & efficient scaling solution?
Here is the more-detailed version of this proposal, presented by Jonathan Toomim jtoomim back in September of 2015 on the bitcoin-dev mailing list (and pretty much ignored for months by almost all the "experts" there):
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-September/011176.html
As I understand it, the current block propagation algorithm is this:
  1. A node mines a block.
  2. It notifies its peers that it has a new block with an inv. Typical nodes have 8 peers.
  3. The peers respond that they have not seen it, and request the block with getdata [hash].
  4. The node sends out the block in parallel to all 8 peers simultaneously. If the node's upstream bandwidth is limiting, then all peers will receive most of the block before any peer receives all of the block. The block is sent out as the small header followed by a list of transactions.
  5. Once a peer completes the download, it verifies the block, then enters step 2.
(If I'm missing anything, please let me know.)
The main problem with this algorithm is that it requires a peer to have the full block before it does any uploading to other peers in the p2p mesh. This slows down block propagation to:
O( p • log_p(n) ) 
where:
  • n is the number of peers in the mesh,
  • p is the number of peers transmitted to simultaneously.
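To make the bound concrete, here is a toy calculation (illustrative, not from the proposal): with store-and-forward relaying, the block must cross log_p(n) "generations" of peers, and each generation costs a node p full-block uploads.

```python
import math

# Toy model of the O(p * log_p(n)) bound quoted above: each node must finish
# downloading the whole block before uploading it to p peers, so full-block
# transfers are serialized along each path, and the number of "generations"
# needed to reach n peers grows as log_p(n).

def naive_propagation_units(n_peers: int, p: int) -> float:
    """Propagation time, in units of one full-block upload over one link."""
    generations = math.log(n_peers, p)   # hops needed to reach n peers
    return p * generations               # each hop costs p full uploads

print(naive_propagation_units(4096, 8))  # 8 * log_8(4096) = 8 * 4 = 32
print(naive_propagation_units(64, 8))    # 8 * log_8(64)   = 8 * 2 = 16
```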
It's like the Napster era of file-sharing. We can do much better than this.
Bittorrent can be an example for us.
Bittorrent splits the file to be shared into a bunch of chunks, and hashes each chunk.
Downloaders (leeches) grab the list of hashes, then start requesting their peers for the chunks out-of-order.
As each leech completes a chunk and verifies it against the hash, it begins to share those chunks with other leeches.
Total propagation time for large files can be approximately equal to the transmission time for an FTP upload.
Sometimes it's significantly slower, but often it's actually faster due to less bottlenecking on a single connection and better resistance to packet/connection loss.
(This could be relevant for crossing the Chinese border, since the Great Firewall tends to produce random packet loss, especially on encrypted connections.)
Bitcoin uses a data structure for transactions with hashes built-in. We can use that in lieu of Bittorrent's file chunks.
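For concreteness, here is a minimal sketch of the structure being referred to: Bitcoin hashes transactions into a Merkle tree using double-SHA256, duplicating the last hash in any odd-length row. (The helper names here are illustrative.)

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    """Bitcoin's double-SHA256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_rows(txids: list[bytes]) -> list[list[bytes]]:
    """All rows of the transaction Merkle tree, leaves first, root last.
    Bitcoin duplicates the last hash when a row has odd length."""
    rows = [txids]
    while len(rows[-1]) > 1:
        row = rows[-1]
        if len(row) % 2:
            row = row + [row[-1]]
        rows.append([dsha256(row[i] + row[i + 1])
                     for i in range(0, len(row), 2)])
    return rows

# Toy example with 4 fake txids:
txids = [dsha256(bytes([i])) for i in range(4)]
rows = merkle_rows(txids)
root = rows[-1][0]
print(len(rows))  # 3 rows: 4 leaves, 2 inner nodes, 1 root
```

Any row of this tree commits to all the transactions below it, which is what lets chunks of a block be verified before the whole block has arrived.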
A Bittorrent-inspired algorithm might be something like this:
  1. (Optional steps to build a Merkle cache; described later)
  2. A seed node mines a block.
  3. It notifies its peers that it has a new block with an extended version of inv.
  4. The leech peers request the block header.
  5. The seed sends the block header. The leech code path splits into two.
  6. (a) The leeches verify the block header, including the PoW. If the header is valid,
  7. (a) They notify their peers that they have a header for an unverified new block with an extended version of inv, looping back to 3. above. If it is invalid, they abort thread (b).
  8. (b) The leeches request the Nth row (from the root) of the transaction Merkle tree, where N might typically be between 2 and 10. That corresponds to about 1/4th to 1/1024th of the transactions. The leeches also request a bitfield indicating which of the Merkle nodes the seed has leaves for. The seed supplies this (0xFFFF...).
  9. (b) The leeches calculate all parent node hashes in the Merkle tree, and verify that the root hash is as described in the header.
  10. The leeches search their Merkle hash cache to see if they have the leaves (transaction hashes and/or transactions) for that node already.
  11. The leeches send a bitfield request to the node indicating which Merkle nodes they want the leaves for.
  12. The seed responds by sending leaves (either txn hashes or full transactions, depending on benchmark results) to the leeches in whatever order it decides is optimal for the network.
  13. The leeches verify that the leaves hash into the ancestor node hashes that they already have.
  14. The leeches begin sharing leaves with each other.
  15. If the leaves are txn hashes, they check their cache for the actual transactions. If they are missing it, they request the txns with a getdata, or all of the txns they're missing (as a list) with a few batch getdatas.
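Steps 8-9 above (requesting the Nth Merkle row and verifying that it hashes up to the root in the header) can be sketched as follows; this is a toy illustration using Bitcoin-style double-SHA256, not the proposal's actual wire format:

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def hash_row_up(row: list[bytes]) -> list[bytes]:
    """Combine a Merkle row pairwise (duplicating an odd tail, Bitcoin-style)."""
    if len(row) % 2:
        row = row + [row[-1]]
    return [dsha256(row[i] + row[i + 1]) for i in range(0, len(row), 2)]

def verify_row_against_root(row: list[bytes], header_root: bytes) -> bool:
    """Hash a received Merkle row upward until one hash remains, and check
    that it equals the root committed to in the (already-PoW-checked) header."""
    while len(row) > 1:
        row = hash_row_up(row)
    return row[0] == header_root

# Toy check: build a root from 4 leaves, then verify rows against it.
leaves = [dsha256(bytes([i])) for i in range(4)]
root = hash_row_up(hash_row_up(leaves))[0]
print(verify_row_against_root(leaves, root))        # True
print(verify_row_against_root(leaves[::-1], root))  # False: reordered leaves
```

Once a row is verified this way, each hash in it serves the same role as a BitTorrent chunk hash: any leaves later received under it can be checked immediately (step 13).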
Features and benefits
The main feature of this algorithm is that a leech will begin to upload chunks of data as soon as it gets them and confirms both PoW and hash/data integrity instead of waiting for a full copy with full verification.
Inefficient cases, and mitigations
This algorithm is more complicated than the existing algorithm, and won't always be better in performance.
Because more round trip messages are required for negotiating the Merkle tree transfers, it will perform worse in situations where the bandwidth to ping latency ratio is high relative to the blocksize.
Specifically, the minimum per-hop latency will likely be higher.
This might be mitigated by reducing the number of round-trip messages needed to set up the BlockTorrent by using larger and more complex inv-like and getdata-like messages that preemptively send some data (e.g. block headers).
This would trade off latency for bandwidth overhead from larger duplicated inv messages.
Depending on implementation quality, the latency for the smallest block size might be the same between algorithms, or it might be 300% higher for the torrent algorithm.
For small blocks (perhaps < 100 kB), the BlockTorrent algorithm will likely be slightly slower.
Sidebar from the OP: So maybe this would discourage certain miners (cough Dow cough) from mining blocks that aren't full enough:
Why is [BTCC] limiting their block size to under 750 all of a sudden?
https://np.reddit.com/Bitcoin/comments/486o1u/why_is_bttc_limiting_their_block_size_to_unde

For large blocks (e.g. 8 MB over 20 Mbps), I expect the BlockTorrent algo will likely be around an order of magnitude faster in the worst case (adversarial) scenarios, in which none of the block's transactions are in the caches.

One of the big benefits of the BlockTorrent algorithm is that it provides several obvious and straightforward points for bandwidth saving and optimization by caching transactions and reconstructing the transaction order.

Future work: possible further optimizations
A cooperating miner [could] pre-announce Merkle subtrees with some of the transactions they are planning on including in the final block.
Other miners who see those subtrees [could] compare the transactions in those subtrees to the transaction sets they are mining with, and can rearrange their block prototypes to use the same subtrees as much as possible.
In the case of public pools supporting the getblocktemplate protocol, it might be possible to build Merkle subtree caches without the pool's help by having one or more nodes just scraping their getblocktemplate results.
Even if some transactions are inserted or deleted, it [might] be possible to guess a lot of the tree based on the previous ordering.
Once a block header and the first few rows of the Merkle tree [had] been published, they [would] propagate through the whole network, at which time full nodes might even be able to guess parts of the tree by searching through their txn and Merkle node/subtree caches.
That might be fun to think about, but probably not effective due to O( n² ) or worse scaling with transaction count.
Might be able to make it work if the whole network cooperates on it, but there are probably more important things to do.
Leveraging other features from BitTorrent
There are also a few other features of Bittorrent that would be useful here, like:
  • prioritizing uploads to different peers based on their upload capacity,
  • banning peers that submit data that doesn't hash to the right value.
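The hash-check-and-ban idea in the second bullet might look something like this minimal sketch (the names and structure are hypothetical, not from any implementation):

```python
import hashlib

banned: set[str] = set()

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def accept_chunk(peer: str, chunk: bytes, expected_hash: bytes) -> bool:
    """Accept a chunk only if it hashes to the value already verified
    against the Merkle tree; ban the sending peer on a mismatch."""
    if dsha256(chunk) != expected_hash:
        banned.add(peer)
        return False
    return True

good = b"some transaction bytes"
print(accept_chunk("peer-a", good, dsha256(good)))       # True
print(accept_chunk("peer-b", good, dsha256(b"evil")))    # False, peer-b banned
```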
Sidebar from the OP: Hmm... maybe that would be one way to deal with the DDoS-ing we're experiencing right now? I know the DDoSer is using a rotating list of proxies, but it could still be a quick-and-dirty way to mitigate his attack.
DDoS started again. Have a nice day, guys :)
https://np.reddit.com/Bitcoin_Classic/comments/47zglz/ddos_started_again_have_a_nice_day_guys/d0gj13y
(It might be good if we could get Bram Cohen to help with the implementation.)
Using the existing BitTorrent algorithm as-is - versus tailoring a new algorithm optimized for Bitcoin
Another possible option would be to just treat the block as a file and literally Bittorrent it.
But I think that there should be enough benefits to integrating it with the existing bitcoin p2p connections and also with using bitcoind's transaction caches and Merkle tree caches to make a native implementation worthwhile.
Also, BitTorrent itself was designed to optimize more for bandwidth than for latency, so we will have slightly different goals and tradeoffs during implementation.
Concerns, possible attacks, mitigations, related work
One of the concerns that I initially had about this idea was that it would involve nodes forwarding unverified block data to other nodes.
At first, I thought this might be useful for a rogue miner or node who wanted to quickly waste the whole network's bandwidth.
However, in order to perform this attack, the rogue needs to construct a valid header with a valid PoW, but use a set of transactions that renders the block as a whole invalid in a manner that is difficult to detect without full verification.
It will be difficult to design such an attack so that the damage in bandwidth used has a greater value than the 240 exahashes (and 25.1 BTC opportunity cost) associated with creating a valid header.
Related work: IBLT (Invertible Bloom Lookup Tables)
As I understand it, the O(1) IBLT approach requires that blocks follow strict rules (yet to be fully defined) about the transaction ordering.
If these are not followed, then it turns into sending a list of txn hashes, and separately ensuring that all of the txns in the new block are already in the recipient's mempool.
When mempools are very dissimilar, the IBLT approach performance degrades heavily and performance becomes worse than simply sending the raw block.
This could occur if a node just joined the network, during chain reorgs, or due to malicious selfish miners.
Also, if the mempool has a lot more transactions than are included in the block, the false positive rate for detecting whether a transaction already exists in another node's mempool might get high for otherwise reasonable bucket counts/sizes.
With the BlockTorrent approach, the focus is on transmitting the list of hashes in a manner that propagates as quickly as possible while still allowing methods for reducing the total bandwidth needed.
Remark
The BlockTorrent algorithm does not really address how the actual transaction data will be obtained because, once the leech has the list of txn hashes, the standard Bitcoin p2p protocol can supply them in a parallelized and decentralized manner.
Thoughts?
-jtoomim
submitted by ydtm to btc [link] [comments]

Toomim Bros supports democracy and Bitcoin XT with ~350 TH/s

We just sent this email to our customers:
The future of bitcoin is determined by the people who use it, and especially by the people who mine it. Each hash calculated on a block is a vote for how the bitcoin network should operate. Up until now, we've effectively had a one-party system. It's a distributed democratic system that has heretofore been left unused. That has changed. We the miners now have the power to determine the future of Bitcoin.
If you are currently using one of our p2pool nodes or are considering doing so in the future, you have a decision to make. If not, feel free to disregard this message. If you use p2pool but don't care much, you can disregard this email as well, and we'll make the decision for you.
As some of you are aware, a version of Bitcoin XT with code to allow for a hard fork to increase the maximum bitcoin block size was released less than a day ago. You can read a description of Bitcoin XT and some of its motivations, Mike Hearn's Manifesto for the release, and Gavin Andresen's exhaustive list and rebuttal of counterarguments for bigger block sizes for more information on this debate. For news supporting the use of Bitcoin XT, you can check out reddit.com/bitcoinxt and voat.co/v/bitcoin. For news that censors anything promoting the use of Bitcoin XT and moderated to favor the status quo, you can check out reddit.com/Bitcoin.
Mining with Bitcoin XT will have no major differences from mining with Bitcoin Core unless 75% of the last 1000 blocks mined are mined with XT or a derivative. If this threshold is ever reached, then a waiting period of two weeks will start, after which XT nodes will start to accept (and possibly create) blocks up to 8 MB in size. The creation of a > 1 MB block will create a hard fork. There is no way to increase the block limit without a hard fork, and demand for bitcoin transactions will soon exceed what can be supplied with 1 MB blocks. Some people think that creating a hard fork like this could kill Bitcoin. I think this will save it.
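The 75%-of-the-last-1000-blocks activation rule described above amounts to a simple counting check, sketched here for illustration (the function name and shape are not XT's actual implementation):

```python
# Illustrative sketch of the activation rule described above: the larger
# block size arms only once 750 of the last 1000 blocks signal support.
def supermajority_reached(recent_blocks_signal_xt: list[bool],
                          window: int = 1000, threshold: int = 750) -> bool:
    recent = recent_blocks_signal_xt[-window:]
    return len(recent) == window and sum(recent) >= threshold

print(supermajority_reached([True] * 800 + [False] * 200))  # True:  800/1000
print(supermajority_reached([True] * 700 + [False] * 300))  # False: 700/1000
```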
Toomim Bros operates two p2pool nodes. We have switched one of those nodes to use Bitcoin XT, and we will leave the other one on Bitcoin Core. That way, our customers can choose whichever node they want in order to make their own vote for how Bitcoin should be run. Everyone is welcome to use these nodes, though you won't have great mining efficiency unless you're inside our datacenter.
So far, Bitcoin Core (and consequently all Bitcoin protocol) development has progressed on a consensus model. All of the main developers have to agree to a change in order for it to take effect. For any important change, there will always be some people who are unhappy about it, so consensus only works for making important changes in oligarchies, and even in oligarchies, it doesn't work well. We at Toomim Bros think that decisions about how Bitcoin should operate ought to be democratized. As such, we support the existence of Bitcoin XT and think that all miners should have the option to choose which version they wish to support and use.
Toomim Bros also thinks that 1 MB is way too small, and that the network can and should support larger blocks. We think that one goal for Bitcoin should be minimizing the real cost per transaction. Currently, that cost is supported primarily by the block subsidy, and runs at about 65 kWh per transaction. (300 MW mining network, 1.27 transactions per second.) That's about $4 per transaction. Increasing the transaction volume would not directly require any significant increase in the mining network power consumption, so it would reduce the real cost per transaction. Our facility spends about 80x as much on electricity as it does on internet connectivity. We do not think large blocks will present a large burden on small bitcoin miners like us.
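The ~65 kWh figure above follows directly from the two stated assumptions; here is the arithmetic:

```python
# Worked arithmetic for the stated cost-per-transaction estimate:
# a ~300 MW mining network processing ~1.27 transactions per second.
network_watts = 300e6
tx_per_second = 1.27

joules_per_tx = network_watts / tx_per_second   # ~236 MJ per transaction
kwh_per_tx = joules_per_tx / 3.6e6              # 1 kWh = 3.6 MJ

print(f"{kwh_per_tx:.1f} kWh per transaction")  # 65.6 kWh
```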
As such, we are putting the miners owned by Toomim Bros (the company) and the Toomim brothers (the individuals) on the XT node.
We're running XT on 74.82.233.205:9334 (global IP) or 10.0.1.2:9332 (LAN IP), and Core on 74.82.233.205:9332 (global IP) or 10.0.0.1:9332 (LAN IP).
(We also will probably be changing our IP address for these nodes soon. I'll publish details about the new IP once we're ready to make those changes.)
Jonathan Toomim
Grand Poobah
Toomim Bros Bitcoin Mining Concern
We are a medium-sized (750 kW, soon to be 2.25 MW) bitcoin miner hosting company located in central Washington. Visit http://toom.im for more information on us.
I'm interested to see how our customers vote. Most of our customers don't use p2pool right now. It would make me happy if some of them decided to switch to p2pool in order to be able to vote for XT.
submitted by jtoomim to bitcoinxt [link] [comments]

Torrent-style new-block propagation on Merkle trees | Jonathan Toomim (Toomim Bros) | Sep 23 2015

Jonathan Toomim (Toomim Bros) on Sep 23 2015:
As I understand it, the current block propagation algorithm is this:
  1. A node mines a block.
  2. It notifies its peers that it has a new block with an inv. Typical nodes have 8 peers.
  3. The peers respond that they have not seen it, and request the block with getdata [hash].
  4. The node sends out the block in parallel to all 8 peers simultaneously. If the node's upstream bandwidth is limiting, then all peers will receive most of the block before any peer receives all of the block. The block is sent out as the small header followed by a list of transactions.
  5. Once a peer completes the download, it verifies the block, then enters step 2.
(If I'm missing anything, please let me know.)
The main problem with this algorithm is that it requires a peer to have the full block before it does any uploading to other peers in the p2p mesh. This slows down block propagation to O( p • log_p(n) ), where n is the number of peers in the mesh, and p is the number of peers transmitted to simultaneously.
It's like the Napster era of file-sharing. We can do much better than this. Bittorrent can be an example for us. Bittorrent splits the file to be shared into a bunch of chunks, and hashes each chunk. Downloaders (leeches) grab the list of hashes, then start requesting their peers for the chunks out-of-order. As each leech completes a chunk and verifies it against the hash, it begins to share those chunks with other leeches. Total propagation time for large files can be approximately equal to the transmission time for an FTP upload. Sometimes it's significantly slower, but often it's actually faster due to less bottlenecking on a single connection and better resistance to packet/connection loss. (This could be relevant for crossing the Chinese border, since the Great Firewall tends to produce random packet loss, especially on encrypted connections.)
Bitcoin uses a data structure for transactions with hashes built-in. We can use that in lieu of Bittorrent's file chunks.
A Bittorrent-inspired algorithm might be something like this:
  1. (Optional steps to build a Merkle cache; described later)
  2. A seed node mines a block.
  3. It notifies its peers that it has a new block with an extended version of inv.
  4. The leech peers request the block header.
  5. The seed sends the block header. The leech code path splits into two.
5(a). The leeches verify the block header, including the PoW. If the header is valid,
6(a). They notify their peers that they have a header for an unverified new block with an extended version of inv, looping back to step 3 above. If the header is invalid, they abort thread (b).
5(b). The leeches request the Nth row (from the root) of the transaction Merkle tree, where N might typically be between 2 and 10. That corresponds to about 1/4th to 1/1024th of the transactions. The leeches also request a bitfield indicating which of the Merkle nodes the seed has leaves for. The seed supplies this (0xFFFF...).
6(b). The leeches calculate all parent node hashes in the Merkle tree, and verify that the root hash is as described in the header.
7(b). The leeches search their Merkle hash cache to see if they have the leaves (transaction hashes and/or transactions) for that node already.
8(b). The leeches send a bitfield request to the seed indicating which Merkle nodes they want the leaves for.
9(b). The seed responds by sending leaves (either txn hashes or full transactions, depending on benchmark results) to the leeches in whatever order it decides is optimal for the network.
10(b). The leeches verify that the leaves hash into the ancestor node hashes that they already have.
11(b). The leeches begin sharing leaves with each other.
12(b). If the leaves are txn hashes, the leeches check their cache for the actual transactions. For any they are missing, they request the txns with a getdata, or all of the missing txns (as a list) with a few batch getdatas.
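The bitfields exchanged in step 5(b) and in the leaf request could be packed one bit per Merkle node; a minimal sketch (a hypothetical encoding, not a specified wire format):

```python
def make_bitfield(wanted, n):
    """One bit per Merkle node in the row; a set bit means 'send me the
    leaves under this node'. A seed that has everything answers with
    all bits set (0xFFFF...), as in step 5(b)."""
    bf = bytearray((n + 7) // 8)
    for i in wanted:
        bf[i // 8] |= 0x80 >> (i % 8)
    return bytes(bf)

def parse_bitfield(bf, n):
    """Recover the list of requested node indices from the bitfield."""
    return [i for i in range(n) if bf[i // 8] & (0x80 >> (i % 8))]

assert parse_bitfield(make_bitfield([0, 3, 9], 12), 12) == [0, 3, 9]
```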
The main feature of this algorithm is that a leech will begin to upload chunks of data as soon as it gets them and confirms both PoW and hash/data integrity, instead of waiting for a full copy with full verification.
This algorithm is more complicated than the existing algorithm, and won't always be better in performance. Because more round trip messages are required for negotiating the Merkle tree transfers, it will perform worse in situations where the bandwidth to ping latency ratio is high relative to the blocksize. Specifically, the minimum per-hop latency will likely be higher. This might be mitigated by reducing the number of round-trip messages needed to set up the blocktorrent by using larger and more complex inv-like and getdata-like messages that preemptively send some data (e.g. block headers). This would trade off latency for bandwidth overhead from larger duplicated inv messages. Depending on implementation quality, the latency for the smallest block size might be the same between algorithms, or it might be 300% higher for the torrent algorithm. For small blocks (perhaps < 100 kB), the blocktorrent algorithm will likely be slightly slower. For large blocks (e.g. 8 MB over 20 Mbps), I expect the blocktorrent algo will likely be around an order of magnitude faster in the worst case (adversarial) scenarios, in which none of the block's transactions are in the caches.
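The worst-case comparison above can be roughed out numerically. All constants below (8 peers, 5 hops, 100 ms of negotiation latency per hop) are illustrative assumptions, not measurements:

```python
# Rough arithmetic behind the worst-case comparison (illustrative only).
block_bits = 8e6 * 8                # 8 MB block
link_bps = 20e6                     # 20 Mbps upstream
tx_time = block_bits / link_bps     # 3.2 s to transmit the block once

# Store-and-forward: every hop pays p full transmissions (p = 8 peers),
# over ~5 hops for a few thousand nodes -> tens of seconds.
full_block = 8 * 5 * tx_time

# Torrent-style: chunks start relaying as soon as they are verified, so
# total time approaches one transmission plus per-hop negotiation latency.
torrent = tx_time + 5 * 0.1         # assume ~100 ms negotiation per hop

print(full_block, torrent)          # ~128 s vs ~3.7 s in this toy model
```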
One of the big benefits of the blocktorrent algorithm is that it provides several obvious and straightforward points for bandwidth saving and optimization by caching transactions and reconstructing the transaction order. A cooperating miner can pre-announce Merkle subtrees with some of the transactions they are planning on including in the final block. Other miners who see those subtrees can compare the transactions in those subtrees to the transaction sets they are mining with, and can rearrange their block prototypes to use the same subtrees as much as possible. In the case of public pools supporting the getblocktemplate protocol, it might be possible to build Merkle subtree caches without the pool's help by having one or more nodes just scraping their getblocktemplate results. Even if some transactions are inserted or deleted, it may be possible to guess a lot of the tree based on the previous ordering.
Once a block header and the first few rows of the Merkle tree have been published, they will propagate through the whole network, at which time full nodes might even be able to guess parts of the tree by searching through their txn and Merkle node/subtree caches. That might be fun to think about, but probably not effective due to O(n²) or worse scaling with transaction count. Might be able to make it work if the whole network cooperates on it, but there are probably more important things to do.
There are also a few other features of Bittorrent that would be useful here, like prioritizing uploads to different peers based on their upload capacity, and banning peers that submit data that doesn't hash to the right value. (It might be good if we could get Bram Cohen to help with the implementation.)
Another option is just to treat the block as a file and literally Bittorrent it, but I think that there should be enough benefits to integrating it with the existing bitcoin p2p connections and also with using bitcoind's transaction caches and Merkle tree caches to make a native implementation worthwhile. Also, Bittorrent itself was designed to optimize more for bandwidth than for latency, so we will have slightly different goals and tradeoffs during implementation.
One of the concerns I initially had about this idea was that it involves nodes forwarding unverified block data to other nodes. At first, I thought this might be useful to a rogue miner or node who wanted to quickly waste the whole network's bandwidth. However, in order to perform this attack, the rogue needs to construct a valid header with a valid PoW, but use a set of transactions that renders the block as a whole invalid in a manner that is difficult to detect without full verification. It will be difficult to design such an attack so that the bandwidth wasted is worth more than the 240 exahashes (and 25.1 BTC opportunity cost) spent creating a valid header.
As I understand it, the O(1) IBLT approach requires that blocks follow strict rules (yet to be fully defined) about the transaction ordering. If these are not followed, then it turns into sending a list of txn hashes, and separately ensuring that all of the txns in the new block are already in the recipient's mempool. When mempools are very dissimilar, the IBLT approach degrades heavily and becomes worse than simply sending the raw block. This could occur if a node just joined the network, during chain reorgs, or due to malicious selfish miners. Also, if the mempool has a lot more transactions than are included in the block, the false positive rate for detecting whether a transaction already exists in another node's mempool might get high for otherwise reasonable bucket counts/sizes.
With the blocktorrent approach, the focus is on transmitting the list of hashes in a manner that propagates as quickly as possible while still allowing methods for reducing the total bandwidth needed. The blocktorrent algorithm does not really address how the actual transaction data will be obtained because, once the leech has the list of txn hashes, the standard Bitcoin p2p protocol can supply them in a parallelized and decentralized manner.
Thoughts?
-jtoomim
original: http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-September/011176.html
submitted by dev_list_bot to bitcoin_devlist [link] [comments]

Composite priority: combining fees and bitcoin-days into one number | Jonathan Toomim | Oct 28 2015

Jonathan Toomim on Oct 28 2015:
Assigning 5% of block space based on bitcoin-days destroyed (BDD) and the other 95% based on fees seems like a rather awkward approach to me. For one thing, it means two code paths in pretty much every procedure dealing with a constrained resource (e.g. mempool, CNB). This makes code harder to write, harder to maintain, and slower to execute. As a result, some people have proposed eliminating BDD priority altogether. I have another idea.
We can create and maintain a conversion rate between BDD and fees to create a composite priority metric. Then we just do compPrio = BDD * conversionRate + txFee.
How do we calculate conversionRate? We want the following equation to be true:
sum(BDD) * conversionRate = BDDWeight * sum(fees)
So we sum up the mempool fees, and we sum up the mempool BDD. We get a policy statement from the command line for a relative weight of BDD vs fees (default 0.05), and then conversionRate = (summedFees / summedBDD) * BDDWeight.
As an optimization, rather than scanning over the whole mempool to calculate this, we can just store the sum and add or subtract from it each time a tx enters or leaves the mempool. In order to minimize drift (the BDD for a transaction changes over time), we recalculate the whole thing each time a new block is found.
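A minimal sketch of the scheme described above (illustrative Python, not Bitcoin Core code; the class and method names are made up):

```python
class CompositePriority:
    """Running-sum version of the composite priority idea: keep
    sum(fees) and sum(BDD) incrementally as txns enter and leave the
    mempool, and recompute from scratch on each new block to correct
    for BDD drift (a txn's BDD grows over time)."""

    def __init__(self, bdd_weight=0.05):
        self.bdd_weight = bdd_weight  # relative weight of BDD vs fees
        self.sum_fees = 0.0
        self.sum_bdd = 0.0

    def add_tx(self, fee, bdd):
        self.sum_fees += fee
        self.sum_bdd += bdd

    def remove_tx(self, fee, bdd):
        self.sum_fees -= fee
        self.sum_bdd -= bdd

    def conversion_rate(self):
        # conversionRate = (summedFees / summedBDD) * BDDWeight
        if self.sum_bdd == 0:
            return 0.0
        return (self.sum_fees / self.sum_bdd) * self.bdd_weight

    def comp_prio(self, fee, bdd):
        # compPrio = BDD * conversionRate + txFee
        return bdd * self.conversion_rate() + fee
```

With a mempool summing to 4000 satoshis of fees and 40 BDD at the default 0.05 weight, the rate works out to 5 satoshis per bitcoin-day, so a txn paying 1000 satoshis with 10 BDD scores 1050.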
original: http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-October/011622.html
submitted by dev_list_bot to bitcoin_devlist [link] [comments]
