Bitcoin scales – in dimensions that never before ...

Subreddit Stats: programming top posts from 2019-10-22 to 2020-10-21 06:41 PDT

Period: 364.67 days
                  Submissions   Comments
Total                    1000     180545
Rate (per day)           2.74     491.84
Unique Redditors          629      34951
Combined Score        1178903    2688497

Top Submitters' Top Submissions

  1. 47468 points, 49 submissions: iamkeyur
    1. One Guy Ruined Hacktoberfest 2020 (3039 points, 584 comments)
    2. AWS forked my project and launched it as its own service (2956 points, 810 comments)
    3. Privacy analysis of Tiktok’s app and website (2858 points, 234 comments)
    4. 98.css – design system for building faithful recreations of Windows 98 UIs (2781 points, 318 comments)
    5. Microsoft demos language model that writes code based on signature and comment (2621 points, 614 comments)
    6. Why does HTML think “chucknorris” is a color? (2565 points, 531 comments)
    7. Windows 95 UI Design (2309 points, 665 comments)
    8. The Linux codebase has over 3k TODO comments, many from over a decade ago (2119 points, 369 comments)
    9. eBay is port scanning visitors to their website (1829 points, 236 comments)
    10. Using const/let instead of var can make JavaScript code run 10× slower in Webkit (1814 points, 525 comments)
  2. 44853 points, 28 submissions: speckz
    1. From August, Chrome will start blocking ads that consume 4MB of network data, 15 seconds of CPU usage in any 30 second period, or 60 seconds of total CPU usage (8434 points, 590 comments)
    2. How To Spot Toxic Software Jobs From Their Descriptions (6246 points, 1281 comments)
    3. A Facebook crawler was making 7M requests per day to my stupid website (2662 points, 426 comments)
    4. Apple, Your Developer Documentation is Garbage (2128 points, 432 comments)
    5. The code I’m still ashamed of (2016) (2105 points, 429 comments)
    6. Slack Is Fumbling Developers And The Rise Of Developer Discords (2095 points, 811 comments)
    7. The Chromium project finds that around 70% of our serious security bugs are memory safety problems. Our next major project is to prevent such bugs at source. (1959 points, 418 comments)
    8. Advice to Myself When Starting Out as a Software Developer (1934 points, 257 comments)
    9. Software patents are another kind of disease (1893 points, 419 comments)
    10. My favourite Git commit (1772 points, 206 comments)
  3. 35237 points, 28 submissions: whackri
    1. It is perfectly OK to only code at work, you can have a life too (6765 points, 756 comments)
    2. Kernighan's Law - Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it. (5171 points, 437 comments)
    3. The entire Apollo 11 computer code that helped get us to the Moon is available on github. (3841 points, 433 comments)
    4. Raytracing - in Excel! (2478 points, 168 comments)
    5. Writing userspace USB drivers for abandoned devices (1689 points, 84 comments)
    6. Drum Machine in Excel (1609 points, 60 comments)
    7. fork() can fail: this is important (1591 points, 264 comments)
    8. Learn how computers add numbers and build a 4 bit adder circuit (1548 points, 66 comments)
    9. Heroes Of Might And Magic III engine written from scratch (open source, playable) (1453 points, 84 comments)
    10. Apollo Guidance Computer: Restoring the computer that put man on the Moon (1277 points, 47 comments)
  4. 14588 points, 11 submissions: pimterry
    1. I'm a software engineer going blind, how should I prepare? (4237 points, 351 comments)
    2. The 2038 problem is already affecting some systems (1988 points, 518 comments)
    3. TLDR pages: Simplified, community-driven man pages (1897 points, 182 comments)
    4. JetBrains Mono: A Typeface for Developers (1728 points, 456 comments)
    5. BlurHash: extremely compact representations of image placeholders (930 points, 159 comments)
    6. Let's Destroy C (855 points, 290 comments)
    7. Shared Cache is Going Away (833 points, 192 comments)
    8. XML is almost always misused (766 points, 538 comments)
    9. Wireshark has a new packet diagram view (688 points, 24 comments)
    10. fork() can fail: this is important (460 points, 299 comments)
  5. 14578 points, 9 submissions: magenta_placenta
    1. Trello handed over user's personal account to user's previous company (2962 points, 489 comments)
    2. Feds: IBM did discriminate against older workers in making layoffs - “Analysis shows it was primarily older workers (85.85%) in the total potential pool of those considered for layoff,” the EEOC wrote (2809 points, 509 comments)
    3. Stripe Workers Who Relocate Get $20,000 Bonus and a Pay Cut - Stripe Inc. plans to make a one-time payment of $20,000 to employees who opt to move out of San Francisco, New York or Seattle, but also cut their base salary by as much as 10% (2765 points, 989 comments)
    4. US court fully legalized website scraping and technically prohibited it - On September 9, the U.S. 9th circuit court of Appeals ruled that web scraping public sites does not violate the CFAA (Computer Fraud and Abuse Act) (2014 points, 327 comments)
    5. I Suspect many Task Deadlines are Designed to Force Engineers to Work for Free (1999 points, 553 comments)
    6. Intent to Deprecate and Freeze: The User-Agent string (1012 points, 271 comments)
    7. Contractor admits planting logic bombs in his software to ensure he’d get new work (399 points, 182 comments)
    8. AlphaStar: Grandmaster level in StarCraft II using multi-agent reinforcement learning (396 points, 97 comments)
    9. Half of the websites using WebAssembly use it for malicious purposes - WebAssembly not that popular: Only 1,639 sites of the Top 1 Million use WebAssembly (222 points, 133 comments)
  6. 13750 points, 3 submissions: pedrovhb
    1. Bubble sort visualization (7218 points, 276 comments)
    2. Breadth-first search visualization (3874 points, 96 comments)
    3. Selection sort visualization (2658 points, 80 comments)
  7. 11833 points, 1 submission: flaming_bird
    1. 20GB leak of Intel data: whole Git repositories, dev tools, backdoor mentions in source code (11833 points, 956 comments)
  8. 11208 points, 10 submissions: PowerOfLove1985
    1. No cookie consent walls — and no, scrolling isn’t consent, says EU data protection body (5975 points, 890 comments)
    2. Redesigning uBlock Origin (1184 points, 162 comments)
    3. Playing Around With The Fuchsia Operating System (696 points, 164 comments)
    4. Microsoft's underwater data centre resurfaces after two years (623 points, 199 comments)
    5. Microsoft Paint/Paintbrush in Javascript (490 points, 58 comments)
    6. GitHub shuts off access to Aurelia repository, citing trade sanctions (478 points, 81 comments)
    7. How 3D Game Rendering Works: Texturing (475 points, 22 comments)
    8. Simdjson: Parsing Gigabytes of JSON per Second (441 points, 90 comments)
    9. How 1500 bytes became the MTU of the internet (435 points, 60 comments)
    10. It’s OK for your open source library to be a bit shitty (411 points, 130 comments)
  9. 10635 points, 8 submissions: michalg82
    1. Turning animations to 60fps using AI (3449 points, 234 comments)
    2. Bug #1463112 “Cat sitting on keyboard crashes lightdm” (3150 points, 143 comments)
    3. Heroes Of Might And Magic III engine written from scratch (open source, playable) (1431 points, 172 comments)
    4. Vulkan is coming to Raspberry Pi: first triangle - Raspberry Pi (1318 points, 66 comments)
    5. An EPYC trip to Rome: AMD is Cloudflare's 10th-generation Edge server CPU (431 points, 60 comments)
    6. Microsoft cancels GDC 2020 presence due to coronavirus concerns (Following Sony, Facebook, Kojima Productions, Epic Games, Unity, and more) (371 points, 52 comments)
    7. Moving from reCAPTCHA to hCaptcha - The Cloudflare Blog (278 points, 71 comments)
    8. How much of a genius-level move was using binary space partitioning in Doom? (207 points, 109 comments)
  10. 10106 points, 10 submissions: SerenityOS
    1. Someone suggested I should host my website on my own OS. For that we'll need a web server, so here's me building a basic web server in C++ for SerenityOS! (2269 points, 149 comments)
    2. I've been learning about OS security lately. Here's me making a local root exploit for SerenityOS, and then fixing the kernel bugs that made it possible! (1372 points, 87 comments)
    3. SerenityOS was hacked in a 36c3 CTF! (Exploit and write-up) (1236 points, 40 comments)
    4. One week ago, I started building a JavaScript engine for SerenityOS. Here’s me integrating it with the web browser and adding some simple API’s like alert()! (1169 points, 63 comments)
    5. Implementing macOS-style "purgeable memory" in my kernel. This technique is amazing and helps apps be better memory usage citizens! (1131 points, 113 comments)
    6. SerenityOS: The second year (900 points, 101 comments)
    7. Using my own C++ IDE to make a little program for decorating my webcam frame (571 points, 33 comments)
    8. This morning I ported git to SerenityOS. It took about an hour and some hacks, but it works! :D (547 points, 64 comments)
    9. Smarter C/C++ inlining with attribute((flatten)) (521 points, 118 comments)
    10. Introduction to SerenityOS GUI programming (390 points, 45 comments)

Top Commenters

  1. XANi_ (10753 points, 821 comments)
  2. dnew (7513 points, 641 comments)
  3. drysart (7479 points, 202 comments)
  4. MuonManLaserJab (6666 points, 233 comments)
  5. SanityInAnarchy (6331 points, 350 comments)
  6. AngularBeginner (6215 points, 59 comments)
  7. SerenityOS (5627 points, 128 comments)
  8. chucker23n (5465 points, 370 comments)
  9. IshKebab (4898 points, 393 comments)
  10. L3tum (4857 points, 199 comments)

Top Submissions

  1. 20GB leak of Intel data: whole Git repositories, dev tools, backdoor mentions in source code by flaming_bird (11833 points, 956 comments)
  2. hentAI: Detecting and removing censors with Deep Learning and Image Segmentation by 7cmStrangler (9621 points, 395 comments)
  3. US Politicians Want to Ban End-to-End Encryption by CarrotRobber (9427 points, 523 comments)
  4. From August, Chrome will start blocking ads that consume 4MB of network data, 15 seconds of CPU usage in any 30 second period, or 60 seconds of total CPU usage by speckz (8434 points, 590 comments)
  5. Mozilla: The Greatest Tech Company Left Behind by matthewpmacdonald (7566 points, 1087 comments)
  6. Bubble sort visualization by pedrovhb (7218 points, 276 comments)
  7. During lockdown my wife has been suffering mentally from pressure to stay at her desk 100% of the time otherwise after a few minutes her laptop locks and she is recorded as inactive. I wrote this small app to help her escape her desk by periodically moving the cursor. Hopefully it can help others. by silitbang6000 (7193 points, 855 comments)
  8. It is perfectly OK to only code at work, you can have a life too by whackri (6765 points, 756 comments)
  9. Blockchain, the amazing solution for almost nothing by imogenchampagne (6725 points, 1561 comments)
  10. Blockchain, the amazing solution for almost nothing by jessefrederik (6524 points, 1572 comments)

Top Comments

  1. 2975 points: deleted's comment in hentAI: Detecting and removing censors with Deep Learning and Image Segmentation
  2. 2772 points: I_DONT_LIE_MUCH's comment in 20GB leak of Intel data: whole Git repositories, dev tools, backdoor mentions in source code
  3. 2485 points: api's comment in Stripe Workers Who Relocate Get $20,000 Bonus and a Pay Cut - Stripe Inc. plans to make a one-time payment of $20,000 to employees who opt to move out of San Francisco, New York or Seattle, but also cut their base salary by as much as 10%
  4. 2484 points: a_false_vacuum's comment in Stack Overflow lays off 15%
  5. 2464 points: iloveparagon's comment in Google engineer breaks down the problems he uses when doing technical interviews. Lots of advice on algorithms and programming.
  6. 2384 points: why_not_both_bot's comment in During lockdown my wife has been suffering mentally from pressure to stay at her desk 100% of the time otherwise after a few minutes her laptop locks and she is recorded as inactive. I wrote this small app to help her escape her desk by periodically moving the cursor. Hopefully it can help others.
  7. 2293 points: ThatInternetGuy's comment in Iranian Maintainer refuses to merge code from Israeli Developer. Cites Iranian regulations.
  8. 2268 points: xequae's comment in I'm a software engineer going blind, how should I prepare?
  9. 2228 points: turniphat's comment in AWS forked my project and launched it as its own service
  10. 2149 points: Rami-Slicer's comment in 20GB leak of Intel data: whole Git repositories, dev tools, backdoor mentions in source code
Generated with BBoe's Subreddit Stats
submitted by flpezet to subreddit_stats

Debunked: "Using Bitcoin (Cash) without a second layer is too inefficient, because the entire transaction history would have to be stored and synced by all of the nodes in the network. That would be like every user of email having to store every email that anyone had ever sent."

Once the latest transaction in a coin is buried under enough blocks, the spent transactions before it can be discarded to save disk space.
To facilitate this without breaking the block's hash, transactions are hashed in a Merkle Tree [7][2][5], with only the root included in the block's hash.
Old blocks can then be compacted by stubbing off branches of the tree. The interior hashes do not need to be stored.
A block header with no transactions would be about 80 bytes. If we suppose blocks are generated every 10 minutes, 80 bytes * 6 * 24 * 365 = 4.2MB per year.
With computer systems typically selling with 2GB of RAM as of 2008, and Moore's Law predicting current growth of 1.2GB per year, storage should not be a problem even if the block headers must be kept in memory.
. . . [Users can] verify payments [using Simplified Payment Verification without] running a full network node. A user only needs to keep a copy of the block headers of the longest proof-of-work chain, which he can get by querying network nodes. . .
While I don't think Bitcoin is practical for smaller micropayments right now, it will eventually be as storage and bandwidth costs continue to fall. If Bitcoin catches on on a big scale, it may already be the case by that time. Another way they can become more practical is if I implement client-only mode and the number of network nodes consolidates into a smaller number of professional server farms.
Gavin Andresen:
It is hard to tease out which problem people care about, because most people haven't thought much about the block size and confuse the current pain of downloading the chain initially (pretty easily fixed by getting the current UTXO set from somebody), the current pain of dedicating tens of gigabytes of disk space to the chain (fixed by pruning old, spent blocks and transactions), and slow block propagation times (fixed by improving the code and p2p protocol).
OP's late appendix: Not surprisingly there is a lot of misdirected criticism and brigading going on in the comment section of this post. But if you study the arguments carefully you'll notice that none of them point to truly critical weak points in any of the concepts mentioned above, as the critics speak of risks that would come from extreme scenarios that the incentive structure of Bitcoin (Cash) already heavily disincentivizes.
submitted by fruitsofknowledge to btc

Transcript of the community Q&A with Steve Shadders and Daniel Connolly of the Bitcoin SV development team. We talk about the path to big blocks, new opcodes, selfish mining, malleability, and why November will lead to a divergence in consensus rules. (Cont in comments)

We've gone through the painstaking process of transcribing the linked interview with Steve Shadders and Daniel Connolly of the Bitcoin SV team. There is an amazing amount of information in this interview that we feel is important for businesses and miners to hear, so we believe it was important to get this in written form. To avoid any bias, the transcript is taken almost word for word from the video, with just a few changes made for easier reading. If you see any corrections that need to be made, please let us know.
Each question is in bold, and each question and response is timestamped accordingly. You can follow along with the video here:


Connor: 0:02:19.68,0:02:45.10
Alright, so thank you, Daniel and Steve, for joining us. We're joined by Steve Shadders and Daniel Connolly from nChain, also the lead developers of the Satoshi’s Vision client. So Daniel and Steve, do you guys just want to introduce yourselves before we kind of get started here - who are you guys and how did you get started?
Steve: 0:02:38.83,0:03:30.61
So I'm Steve Shadders, and at nChain I am the director of solutions in engineering, and specifically for Bitcoin SV I am the technical director of the project, which means that I'm a bit less hands-on than Daniel, but I handle a lot of the liaison with the miners - that's the coordination side of the project.
Daniel: Hi, I’m Daniel. I’m the lead developer for Bitcoin SV. As the team's grown, that means I do less actual coding myself and more organizing the team and organizing what we’re working on.
Connor: 0:03:23.07,0:04:15.98
Great, so we took some questions - we asked on Reddit to have people come and post their questions. We tried to take as many of those as we could and eliminate some of the duplicates, so we're gonna kind of go through each question one by one. We added some questions of our own in, and we'll try and get through most of these if we can. So I think we just wanted to start out and ask: you know, Bitcoin Cash is a little bit over a year old now, Bitcoin itself is ten years old - over the past year, what has the process been like for you guys working with the multiple development teams, and why is it important that the Satoshi’s Vision client exists today?
Steve: 0:04:17.66,0:06:03.46
I mean, yes, well, we’ve been in touch with the developer teams for quite some time - I think a bi-weekly meeting of Bitcoin Cash developers across all implementations started around November last year. I myself joined those in January or February of this year, and Daniel a few months later. So we communicate with all of those teams, and I think, you know, it's not been without its challenges. It's well known that there's a lot of disagreement around it, but something I do look forward to in the near future is a day when the consensus issues themselves are all rather settled, and if we get to that point then there's not going to be much reason for the different developer teams to disagree on stuff. They might disagree on non-consensus related stuff, but that's not the end of the world because, you know, Bitcoin Unlimited is free to go and implement whatever they want in the backend of Bitcoin Unlimited, and Bitcoin SV is free to do whatever they want in their backend, and if they interoperate on a non-consensus level, great. If they don't, it's not such a big problem - there will obviously be bridges between the two. So, yeah, I think going forward the complications of having so many personalities with wildly different ideas are going to get less and less.
Cory: 0:06:00.59,0:06:19.59
I guess moving forward now, another question about the testnet - a lot of people on Reddit have been asking what the testing process for Bitcoin SV has been like, and whether you guys plan on releasing any of the results from that testing?
Daniel: 0:06:19.59,0:07:55.55
Sure, yeah, so our release was concentrated on stability, right, with the first release of Bitcoin SV, and that involved doing a large amount of additional testing - particularly not so much at the unit-test level but more at the system-test level: setting up test networks, performing tests, and making sure that the software behaved as we expected, right. Confirming the changes we made, making sure that there weren't any other side effects. Because, you know, it was quite a rush to release the first version, we've got our test results documented, but not in a way that we can really release them. We're thinking about doing that but we’re not there yet.
Steve: 0:07:50.25,0:09:50.87
Just to tidy that up - we've spent a lot of our time developing really robust test processes, and the reporting is something that we can read on our internal systems easily, but we need to tidy that up to give it out for public release. The priority for us was making sure that the software was safe to use. We've established a test framework that involves a progression of code changes through multiple test environments - I think it's five different test environments before it gets the QA stamp of approval - and as for the question about the testnet, yeah, we've got four of them. We've got Testnet One and Testnet Two - a slightly different numbering scheme to the Testnet Three that everyone's probably used to; that’s just how we reference them internally. They're [1 and 2] both forks of Testnet Three. [Testnet] One we used for activation testing, so we would test things before and after activation - that one’s set to reset every couple of days. The other one [Testnet Two] was set to post-activation so that we can test all of the consensus changes. The third one was a performance test network, which I think most people have probably heard us refer to before as the Gigablock Testnet. I get my tongue tied every time I try to say that word, so I've started calling it the performance test network, and I think we're planning on having two of those: one that we can just do our own stuff with and experiment on without having to worry about external unknown factors - other people joining it and doing stuff that we don't know about that affects our ability to baseline performance tests - but the other one (which I think might still be a work in progress, so Daniel might be able to answer that one) is one where basically everyone will be able to join, and they can try and mess stuff up as bad as they want.
Daniel: 0:09:45.02,0:10:20.93
Yeah, so we recently shared the details of Testnet One and Two with the other BCH developer groups. The Gigablock test network we've shared with one group so far, but yeah, we're building it, as Steve pointed out, to be publicly accessible.
Connor: 0:10:18.88,0:10:44.00
I think that was my next question - I saw that you posted on Twitter about the revived Gigablock testnet initiative, and it looked like blocks bigger than 32 megabytes were being mined and propagated there, but maybe the block explorers themselves were going down - what does that revived Gigablock test initiative look like?
Daniel: 0:10:41.62,0:11:58.34
That's what the Gigablock test network is. The Gigablock test network was first set up by Bitcoin Unlimited with nChain’s help, and they did some great work on that, and we wanted to revive it. So we wanted to bring it back and do some large-scale testing on it. It's a flexible network - at one point we had eight different large nodes spread across the globe, sort of mirroring the old one. Right now we've scaled back because we're not using it at the moment, so there are, I think, three. We have produced some large blocks there, and it's helped us a lot in our research into the scaling capabilities of Bitcoin SV, so it's guided the work that the team’s been doing for the last month or two on the improvements that we need for scalability.
Steve: 0:11:56.48,0:13:34.25
I think that's actually a good point to kind of frame where our priorities have been, in kind of two separate stages. I think, as Daniel mentioned before, because of the time constraints we kept the change set for the October 15 release as minimal as possible - it was just the consensus changes. We didn't do any work on performance at all, and we put all our focus and energy into establishing the QA process and making sure that that change was safe, and that was a good process for us to go through. It highlighted what we were missing in our team - we got our recruiters very busy recruiting a Test Manager and more QA people. The second stage after that is performance-related work which, as Daniel mentioned, was guided by the results of our performance testing - they fed into what tasks we were gonna start working on for the performance-related stuff. Now that work is still in progress - for some of the items that we identified, the code is done and that's going through the QA process, but it’s not quite there yet. That's basically the two-stage process that we've been through so far. We have a roadmap that goes further into the future that outlines more stuff, but primarily it’s been QA first, performance second. The performance enhancements are close and on the horizon, but some of that work should be ongoing for quite some time.
Daniel: 0:13:37.49,0:14:35.14
Some of the changes we need for performance are really quite large and really get down into the base level of the software. There are kind of two groups of them, mainly: ones that are internal to the software - to Bitcoin SV itself - improving the way it works inside, and then others that interface it with the outside world. For one of those in particular we're working closely with another group to make a compatible change - it's not consensus-changing or anything like that - but having the same interface on multiple different implementations will be very helpful, right, so we're working closely with them to make improvements for scalability.
Connor: 0:14:32.60,0:15:26.45
Obviously for Bitcoin SV one of the main things that you guys wanted to do, that some of the other developer groups weren't willing to do right now, is to increase the maximum default block size to 128 megabytes. I kind of wanted to pick your brains a little bit about that - a lot of the objection to either removing the block size limit entirely or increasing it on a larger scale is this idea of the infinite block attack, right, and that kind of came through in a lot of the questions. What are your thoughts on the “infinite block attack”? Is it something that really exists, is it something that miners themselves should be more proactive about preventing, or I guess what are your thoughts on that attack that everyone says will happen if you uncap the block size?
Steve: 0:15:23.45,0:18:28.56
I'm often quoted on Twitter and Reddit - I've said before the infinite block attack is bullshit. Now, that's a statement that I suppose is easy to take out of context, but I think the 128 MB limit is something about which there are probably two schools of thought. There are some people who think that you shouldn't increase the limit to 128 MB until the software can handle it, and there are others who think that it's fine to do it now so that the limit is already raised by the time the software improves and can handle it, and you don't run into it. Obviously we’re from the latter school of thought. As I said before, we've got a bunch of performance increases, performance enhancements, in the pipeline. If we wait till May to increase the block size limit to 128 MB, then those performance enhancements will go in, but we won't be able to actually demonstrate them on mainnet. As for the infinite block attack itself, I mean there are a number of mitigations that you can put in place. Firstly, you know, going down to a bit of the tech detail - when you send a block message, or send any peer-to-peer message, there's a header which has the size of the message. If someone says they're sending you a 30MB message and you're receiving it and it gets to 33MB, then obviously you know something's wrong, so you can drop the connection. If someone sends you a message that's 129 MB and you know the block size limit is 128, you know it’s kind of pointless to download that message. So I mean these are just some of the mitigations that you can put in place. When I say the attack is bullshit, I mean it is bullshit in the sense that it's really quite trivial to prevent it from happening. I think there is a bit of a school of thought in the Bitcoin world that if it's not in the software right now then it kind of doesn't exist. I disagree with that, because there are small changes that can be made to work around problems like this. One other aspect of the infinite block attack - and let’s not call it the infinite block attack, let's just call it the large block attack - is that it takes a lot of time to validate. We've gotten around that by having parallel pipelines for blocks to come in, so you've got a block that's coming in and it might be stuck for two hours or whatever, downloading and validating. At some point another block is going to get mined by someone else, and as long as those two blocks aren't stuck in a serial pipeline then, you know, the problem kind of goes away.
Cory: 0:18:26.55,0:18:48.27
Are there any concerns with the propagation of those larger blocks? Because there are a lot of questions around, you know, what practical block size Bitcoin SV could scale to right now, and the concerns around propagating those blocks across the whole network.
Steve: 0:18:45.84,0:21:37.73
Yes, there have been concerns raised about it. I think what people forget is that compact blocks and xThin exist, so a 32MB block does not send 32MB of data in most cases - almost all cases. The concern here that I do find legitimate is the Great Firewall of China. Very early on in Bitcoin SV we started talking with miners on the other side of the firewall, and that was one of their primary concerns. We had anecdotal reports of people who were having trouble getting a stable connection any faster than 200 kilobits per second, and even with compact blocks you still need to get the transactions across the firewall. So we've done a lot of research into that - we tested our own links across the firewall, or rather CoinGeek's links across the firewall, as they’ve given us access to some of their servers so that we can play around, and we were able to get sustained rates of 50 to 90 megabits per second, which pushes that problem quite a long way down the road into the future. I don't know the maths off the top of my head, but the size of the blocks that that can sustain is pretty large. So we're looking at a couple of options - it may well be that the chattiness of the peer-to-peer protocol causes some of these issues with the Great Firewall, so we have someone building a bridge concept/tool where you basically just have one kind of TX vacuum on either side of the firewall that collects them all up and sends them off every one or two seconds as a single big chunk, to eliminate some of that chattiness. The other is we're looking at building a multiplexer that will sit and send stuff up to the peer-to-peer network on one side, send it over splitters - over multiple links - and reassemble it on the other side, so we can sort of transit the Great Firewall without too much trouble. But I mean, getting back to the core of your question - yes, there is a theoretical limit to block size from propagation time, and that's kind of where Moore's Law comes in: put in faster links and you kick that can further down the road, and you just keep on putting in faster links. I don't think 128 MB blocks are going to be an issue though, with the speed of the internet that we have nowadays.
Connor: 0:21:34.99,0:22:17.84
One of the other changes that you guys are introducing is increasing the max script size - I think right now it’s going from 201 to 500 [opcodes]. So a few of the questions we got were: #1, why not uncap it entirely - I think you guys said you ran into some concerns while testing that - and #2, specifically, how certain are you that there are no remaining n-squared bugs or vulnerabilities left in script execution?
Steve: 0:22:15.50,0:25:36.79
It's an interesting decision - we were initially planning on removing that cap altogether, and then the next cap that comes into play after that (the next effective cap) is a 10,000-byte limit on the size of the script. We took a more conservative route and decided to wind that back to 500 - it's interesting that we got some criticism for that, when the primary criticism leveled against us was that it’s dangerous to increase that limit to unlimited. We did that because we’re being conservative. We did some research into these n-squared bugs - sorry, attacks - that people have referred to. We identified a few of them and we had a hard think about it and thought: look, if we can find this many in a short time we can fix them all (the whack-a-mole approach), but it does suggest that there may well be more unknown ones. So we thought about, you know, taking the whack-a-mole approach, but that doesn't really give us any certainty. We will fix all of those individually, but a more global approach is to make sure that if anyone does discover one of these scripts it doesn't bring the node to a screaming halt. The problem here is that because the Bitcoin node is essentially single-threaded, if you get one of these scripts that locks up the script engine for a long time, everything that's behind it in the queue has to stop and wait. So what we wanted to do - and this is something we've got an engineer actively working on right now - is once that script validation code path is properly parallelized (parts of it already are), then we’ll basically assign a few threads for well-known transaction templates, and a few threads for any type of script. So if you get a few scripts that are nasty and lock up a thread for a while, that's not going to stop the node from working, because you've got these other kind of lanes of the highway that are exclusively reserved for well-known script templates, and they'll just keep on passing through. Once you've got that in place, then I think we're in a much better position to get rid of that limit entirely, because the worst that's going to happen is your non-standard script pipelines get clogged up but everything else will keep ticking along. There are other mitigations for this as well - I mean, you know, you could always put a time limit on script execution if you wanted to, and that would be something that would be up to individual miners. Bitcoin SV's job, I think, is to provide the tools for the miners, and the miners can then choose, you know, how to make use of them - if they want to set time limits on script execution then that's a choice for them.
Daniel: 0:25:34.82,0:26:15.85
Yeah, I'd like to point out that a node here, when it receives a transaction through the peer-to-peer network, doesn't have to accept that transaction - you can reject it. If it looks suspicious to the node it can just say, you know, we're not going to deal with that; or if it takes more than five minutes to execute, or more than a minute even, it can just abort and discard that transaction, right. The only time we can’t do that is when it's in a block already, but then it could decide to reject the block as well. Those are all possibilities in the software.
Steve: 0:26:13.08,0:26:20.64
Yeah, and if it's in a block already it means someone else was able to validate it so…
Cory: 0:26:21.21,0:26:43.60
There’s a lot of discussion about the re-enabled opcodes coming – OP_MUL, OP_INVERT, OP_LSHIFT, and OP_RSHIFT. Can you maybe explain the significance of those opcodes being re-enabled?
Steve: 0:26:42.01,0:28:17.01
Well, I mean, one of the most significant things is that, other than two which are minor variants of DUP and MUL, they represent almost the complete set of original opcodes. I think that's not necessarily a technical issue, but it's an important milestone. MUL is one that I've heard some interesting comments about. People ask me why we're putting OP_MUL back in if we're planning on changing it to a big-number operation instead of the 32-bit limit that it's currently imposed upon. The simple answer to that question is that we currently have all of the other arithmetic operations except for OP_MUL. We’ve got add, divide, subtract, modulo – it’s odd to have a script system that's got all the mathematical primitives except for multiplication. The other answer to that question is that they're useful - we've talked about a Rabin signature solution that basically replicates the function of DATASIGVERIFY. That's just one example of a use case for this - most cryptographic primitive operations require mathematical operations, and bit shifts are useful for a whole ton of things. So it's really just about completing that work and completing the script engine - or rather not completing it, but putting it back the way that it was meant to be.
Connor 0:28:20.42,0:29:22.62
Big Num vs 32-bit. Daniel - I think I saw you answer this on Reddit a little while ago - the new opcodes use logical shifts where Satoshi’s version used arithmetic shifts. The general question that a lot of people keep bringing up, maybe in a rhetorical way, is: why not restore it back to the way Satoshi had it exactly - what are the benefits of changing it now to operate a little bit differently?
Daniel: 0:29:18.75,0:31:12.15
Yeah, there are two parts there - the big-number one, and LSHIFT being a logical shift instead of arithmetic. So when we re-enabled these opcodes we looked at them carefully and adjusted them slightly, as we did in the past with OP_SPLIT. So the new LSHIFT and RSHIFT are bitwise operators. They can be used to implement arithmetic-based shifts - I think I've posted a short script that did that - but we can't do it the other way around, right. You couldn't use an arithmetic shift operator to implement a bitwise one. It's because of the ordering of the bytes in the arithmetic values - the values that represent numbers. They're little-endian, which means they're swapped around compared to what many other systems use - what I'd consider normal - big-endian. And if you start shifting that properly as a number, then the shifting sequence in the bytes is a bit strange, so it couldn't go the other way around - you couldn't implement a bitwise shift with an arithmetic one, so we chose to make them bitwise operators - that's what we proposed.
Steve: 0:31:10.57,0:31:51.51
That was essentially a decision that was actually made in May, or rather a consequence of decisions that were made in May. In May we reintroduced OP_AND, OP_OR, and OP_XOR, and the decision to replace three different string operators with OP_SPLIT was also made then. So that was not a decision that we made unilaterally - it was made collectively with all of the BCH developers. Well, not all of them were actually in all of the meetings, but they were all invited.
Daniel: 0:31:48.24,0:32:23.13
Another example of that is that we originally proposed OP_2DIV and OP_2MUL, was it, I think - a single operator that multiplies the value by two, right - but it was pointed out that that can very easily be achieved by just doing multiply-by-two instead of having a separate operator for it, so we scrapped those, we took them back out, because we wanted to keep the number of operators to a minimum, yeah.
Steve: 0:32:17.59,0:33:47.20
There was an appetite around for keeping the operators minimal. I mean, the idea to replace OP_SUBSTR, OP_LEFT, and OP_RIGHT with the OP_SPLIT operator actually came from Gavin Andresen. He made a brief appearance in the Telegram workgroups while we were working out what to do with the May opcodes, and obviously Gavin's word kind of carries a lot of weight and we listen to him. But because we had chosen to implement the May opcodes (the bitwise opcodes) and treat the data as big-endian data streams (well, sorry, big-endian not really applicable - just plain data strings), it would have been completely inconsistent to implement LSHIFT and RSHIFT as integer operators, because then you would have had a set of bitwise operators that operated on two different kinds of data, which would have been nonsensical and very difficult for anyone to work with, so yeah. I mean, it's a bit like P2SH - it wasn't part of the original Satoshi protocol, but once some things are done they're done, and if you want to make forward progress you've got to work within the framework that exists.
Daniel: 0:33:45.85,0:34:48.97
When we get to the big-number ones, then it gets really complicated - you know, big-number implementations - because then you can't change the behavior of the existing opcodes, and I don't mean OP_MUL, I mean the other ones that have been there for a while. You can't suddenly make them big-number ones without seriously looking at what scripts there might be out there and the impact of that change on those existing scripts, right. The other point is you don't know what scripts are out there because of P2SH - there could be scripts that you don't know the content of, and you don't know what effect changing the behavior of these operators would have. The big-number thing is tricky, so another option might be... yeah, I don't know what the options are - it needs some serious thought.
Steve: 0:34:43.27,0:35:24.23
That’s something we've reached out to the other implementation teams about - we actually really would like their input on the best ways to go about restoring big-number operations. It has to be done extremely carefully, and I don't know if we'll get there by May next year, or when, but we’re certainly willing to put a lot of resources into it, and we're more than happy to work with BU or XT or whoever wants to work with us on getting that done, and getting it done safely.
Connor: 0:35:19.30,0:35:57.49
Kind of along this similar vein, you know, Bitcoin Core introduced this concept of standard scripts, right - standard and non-standard scripts. I had a pretty interesting conversation with Clemens Ley about use cases for “non-standard scripts”, as they're called. I know at least one developer on Bitcoin ABC is very hesitant, or kind of pushed back on him about doing that, so what are your thoughts about non-standard scripts and the entirety of an IsStandard check?
Steve: 0:35:58.31,0:37:35.73
I’d actually like to repurpose the concept. I think I mentioned before multi-threaded script validation and having some dedicated well-known script templates - when you say the words "well-known script template", there’s already a check in Bitcoin that kind of tells you if it's well-known or not, and that's IsStandard. I'm generally in favor of getting rid of the notion of standard transactions, but it's actually a decision for miners, and it's really more of a behavioral change than it is a technical change. There's a whole bunch of configuration options that miners can set that affect what they consider to be standard and not standard, but the reality is not too many miners are using those configuration options. So I mean standard transactions as a concept is meaningful to an arbitrary degree, I suppose, but yeah, I would like to make it easier for people to get non-standard scripts into Bitcoin so that they can experiment, and from discussions I’ve had with CoinGeek they’re quite keen on making their miners accept, you know, at least initially a wider variety of transactions.
Daniel: 0:37:32.85,0:38:07.95
So I think IsStandard will remain important within the implementation itself for efficiency purposes, right - you want to streamline the base use case of cash payments, recognizing and prioritizing them. That's where it will remain important, but on the interfaces from the node to the rest of the network, yeah, I could easily see it being removed.
Cory: 0:38:06.24,0:38:35.46
Connor mentioned that there are some people who disagree with Bitcoin SV and what you're doing - a lot of questions around, you know, why November? Why implement these changes in November? They think that maybe a six-month delay might mean there's no split. So, first off, what do you think about the idea of a potential split, and I guess what is the urgency for November?
Steve: 0:38:33.30,0:40:42.42
Well, in November there's going to be a divergence of consensus rules regardless of whether we implement these new opcodes or not. Bitcoin ABC released their spec for the November hard fork change, I think on August 16th or 17th, something like that, and their client as well, and it included CTOR and it included DSV. Now, for the miners that commissioned the SV project, CTOR and DSV are controversial changes, and once they're in, they're in. They can't be reversed - I mean, CTOR maybe you could reverse at a later date, but DSV, once someone's put a P2SH transaction, or even a non-P2SH transaction, into the blockchain using that opcode, it's irreversible. So it's interesting that some people refer to the Bitcoin SV project as causing a split - we're not proposing to do anything that anyone disagrees with. There might be some contention about changing the opcode limit, but as for what we're doing - I mean, Bitcoin ABC already published their spec for May, and it is our spec for the new opcodes. So in terms of urgency - should we wait? Well, the fact is that we can't - come November, you know, it's a bit like SegWit. Once SegWit was in, yes, you arguably could get it out by spending everyone's anyone-can-spend transactions, but in reality it's never going to be that easy, and it's going to cause a lot of economic disruption. So yeah, that's it - we're putting our changes in because it's not gonna make a difference either way in terms of whether there's going to be a divergence of consensus rules - there's going to be a divergence whatever our changes are. Our changes are not controversial at all.
Daniel: 0:40:39.79,0:41:03.08
If we didn't include these changes in the November upgrade we'd be pushing ahead with no changes, right, but the November upgrade is there, so we should use it while we can, adding these non-controversial changes to it.
Connor: 0:41:01.55,0:41:35.61
Can you talk about DATASIGVERIFY? What are your concerns with it? The general concept that's been kind of floated around by Ryan Charles is the idea that it's a subsidy, right - that it takes a whole megabyte and kind of crunches that down, and the computation time stays the same but maybe the cost is less - do you share his view on that, or what are your concerns with it?
Daniel: 0:41:34.01,0:43:38.41
Can I say one or two things about this - there are different ways to look at it, right. I'm an engineer - my specialization is software, so on the economics of it I hear different opinions. I trust some more than others, but I am NOT an economist. With my limited expertise on that, I kind of agree with the ones saying it's a subsidy - it looks very much like it to me - but yeah, that's not my area. What I can talk about is the software - so adding DSV adds really quite a lot of complexity to the code, right, and it's a big change to add that. And what are we going to do - every time someone comes up with an idea we’re going to add a new opcode? How many opcodes are we going to add? I saw reports that Jihan was talking about hundreds of opcodes or something like that, and it's like, how big is this client going to become - how big is this node - is it going to have to handle every kind of weird opcode that's out there? The software is just going to get unmanageable. And DSV - my main consideration at the beginning was, you know, if you can implement it in script you should do it, because that way it keeps the node software simple, it keeps it stable, and it's easier to test that it works properly and correctly. It's almost like adding (?) code to a microprocessor - why would you do that if you can implement it already in the script that is there?
Steve: 0:43:36.16,0:46:09.71
It’s actually an interesting inconsistency, because when we were talking about adding the opcodes in May, the philosophy that seemed to drive the decisions that we were able to form a consensus around was to simplify and keep the opcodes as minimal as possible (i.e. where you could replicate a function by using a couple of primitive opcodes in combination, that was preferable to adding a new opcode that replaced them). OP_SUBSTR is an interesting example - it's a combination of SPLIT, SWAP and DROP opcodes to achieve it. So at the really primitive script level we've got this philosophy of let's keep it minimal, and at this sort of (?) philosophy it’s all let's just add a new opcode for every primitive function - and Daniel's right, it's a question of opening the floodgates. Where does it end? If we're just going to go down this road, it almost opens up the argument: why have a scripting language at all? Why not just hard-code all of these functions in, one at a time? You know, pay-to-public-key-hash is a well-known construct (?) and not bother executing a script at all - but once we've done that we take away all of the flexibility for people to innovate, so it's a philosophical difference, I think, but I think it's one where the position of keeping it simple does make sense. All of the primitives are there to do what people need to do. The things that people feel like they can't do are because of the limits that exist. If we had no opcode limit at all, if you could make a gigabyte transaction - so a gigabyte script - then you could do any kind of crypto that you wanted, even with 32-bit integer operations. Once you get rid of the 32-bit limit, of course, a lot of those scripts come out a lot smaller, so a Rabin signature script shrinks from 100MB to a couple hundred bytes.
Daniel: 0:46:06.77,0:47:36.65
I lost a good six months of my life diving into script, right. Once you start getting into the language and what it can do, it is really pretty impressive how much you can achieve within script. Bitcoin was designed - was released originally - with script. I mean, it didn't have to be: instead of having a transaction with script you could have accounts, and you could say transfer, you know, so many BTC from this public key to this one - but that's not the way it was done. It was done using script, and script provides so many capabilities if you start exploring it properly. If you start really digging into what it can do, yeah, it's really amazing what you can do with script. I'm really looking forward to seeing some very interesting applications from that. I mean, Awemany's zero-conf script was really interesting, right. It relies on DSV, which is a problem (and there are some other things that I don't like about it), but him diving in and using script to solve this problem was really cool - it was really good to see that.
Steve: 0:47:32.78,0:48:16.44
I asked a question of a couple of people in our research team that have been working on the Rabin signature stuff this morning, actually - I wasn't sure where they were up to with it - and they're actually working on a proof of concept (which I believe is pretty close to done) which is a Rabin signature script. It will use smaller signatures so that it can fit within the current limits, but it will be, you know, effectively the same algorithm (as DSV), so I can't give you an exact date on when that will happen, but it looks like we'll have a Rabin signature in the blockchain soon (a mini Rabin signature).
Cory: 0:48:13.61,0:48:57.63
Based on your responses I think I kind of already know the answer to this question, but there are a lot of questions about ending experimentation on Bitcoin. I was gonna kind of turn that into - with the plan that Bitcoin SV is on, do you guys see a potential one final release, you know, where there are gonna be no new opcodes ever released (like maybe five years down the road we just solidify the base protocol and move forward with that), or are you guys more of the idea of being open-ended, where new opcodes can be introduced under appropriate testing?
Steve: 0:48:55.80,0:49:47.43
I think you've got to factor in what I said before about the philosophical differences. I think new functionality can be introduced just fine. Having said that - yes, there is a place for new opcodes, but it's probably a limited place, and in my opinion it's the cryptographic primitive functions. For example, CHECKSIG uses ECDSA with a specific elliptic curve, HASH256 uses SHA256 - at some point in the future those are going to no longer be as secure as we would like them to be, and we'll replace them with different hash functions and verification functions at some point, but I think that's a long way down the track.
Daniel: 0:49:42.47,0:50:30.3
I'd like to see more data too. I'd like to see evidence that these things are needed, and the way I could imagine that happening is, you know, that with the full scripting language some solution is implemented and we discover that it's really useful, and over a period measured in, like, you know, years not days, we find a lot of transactions are using this feature - then maybe, you know, maybe we should look at introducing an opcode to optimize it. But optimizing before we even know if it's going to be useful - yeah, that's the wrong approach.
Steve: 0:50:28.19,0:51:45.29
I think that optimization is actually going to become an economic decision for the miners. From the miners' point of view: does it make more sense for them to be able to optimize a particular process - does it reduce costs for them such that they can offer a better service to everyone else? Yeah, so ultimately these decisions are going to be miners' decisions, not developer decisions. Developers of course can offer their input - I wouldn't expect every miner to be an expert on script, but as we're already seeing, miners are actually starting to employ their own developers. I’m not just talking about us - there are other miners in China that I know have got some really bright people on their staff who question and challenge all of the changes - study them and produce their own reports. We've been lucky to actually be able to talk to some of those people and have some really fascinating technical discussions with them.
submitted by The_BCH_Boys to btc

Can we talk about sharding and decentralized scaling for Raiblocks?

This essay contains a healthy dose of math sprinkled with opinion, and I would be the first to admit that my math and personal opinions are sometimes wrong. The beauty of these forums is that they allow us to discuss topics in depth, and with enough group scrutiny we should arrive at the truth. I'm actually a cryptocurrency noob; I've only been looking at it in earnest for a few months, but I've seen enough to conclude that we are in the middle of a revolution, and if I don't intellectually participate somehow, I think I'll regret it for the rest of my life.
Here I analyze sharding in a PoS (proof-of-stake) system, and I will show that not only is sharding good, but I will quantify just how beneficial it is to Tps (transactions per second of the whole network) and mps (messages per second processed by each individual node). I use Raiblocks as my point of departure, regarding it as both my inspiration and my object of critique. But much of the discussion should be relevant to any PoS sharded system.
As you may know, Raiblocks does not employ ledger sharding, but seeing as every wallet is already in its own separate blockchain, it's basically already halfway there! From an engineering perspective, sharding is low-hanging fruit for a block-lattice structure like Raiblocks', especially when you compare it to how complicated it is for single-blockchain currencies.
For the record, I think that Raiblocks will scale just fine according to the current strategy laid out by Colin LeMahieu (u/meor) . By using only full nodes and hosting them in enterprise grade servers (basically datacenters), chances are good that the network will be able to keep up with future Tps (transaction per second) growth. Skeptics have been questioning if people are going to be willing to run nodes pro bono, just to support the network. But I don't doubt that many vendors will jump at the chance. If I'm Amazon, and I've been paying 3% of everything to Visa all these years, when there's an option to basically run my own Visa, I take it.
Payment networks like Paypal have been offering free person-to-person payments for years, eating the costs of processing those transactions in exchange for the opportunity to take their cut when those same people pay online vendors like Amazon. This makes business sense because only a minority of transactions are person-to-person anyway. Most payments result from people buying stuff. So, in a sense, vendors like Amazon have already been subsidizing our free transactions for years. By running Raiblocks nodes, they would still be subsidizing our transactions, but it would be a better deal than what they were getting before.
But have we forgotten something here? Is this really the dream of the instant, universal, decentralized, uncensorable payment network that was promised and only kinda delivered by Bitcoin? Decentralization comes in a spectrum, and while this is certainly better than a private blockchain like Ripple, the future of Raiblocks that we're looking at is a smallish number of supernodes run by a consortium of corporations, governments, and maybe a sprinkling of die-hard fans.
You may ask, but what about the nodes run by you and me on our dinky home computers and cable modem connections? Well, people need to remember that Raiblocks nodes need to talk to each other every time there's a transaction, in order to exchange their votes. The more nodes there are, the more messages have to be received and sent per node per transaction. Having more nodes may improve the decentralization, redundancy, and robustness of the network, but speed it definitely does not. Sure, the SSD of a computer running a mock node will handle 7000 tps, but the real bottleneck is network IO, not disk IO, and how many Comcast internet plans are going to keep up with 7000 x N messages per second, where N is the total number of nodes? If you take the message size to be 260 bytes (credit to u/juanjux's packet-sniffing skills), and the number of nodes to be 1000, that's 1.8 GB/s. Also, if you consider that at least two messages will need to be exchanged with every node (one for the sending wallet, one for the receiving), the network requirements per node become 3.6 GB/s. This requirement applies to both the download and upload bandwidth, since in addition to receiving votes from other nodes, you have to announce your own vote to all of them as well. Maybe with multicasting upload requirements can be relaxed, but the overall story is the same: you almost want to convince small players not to run their own nodes, so N doesn't grow too large. Hence, the lack of dividends.
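To make that arithmetic explicit, here is a quick back-of-the-envelope check using only the figures quoted above:

```python
# Per-node bandwidth if every node must exchange votes with every
# other node for every transaction (numbers from the paragraph above).
TPS = 7_000          # transactions per second (the mock-node SSD figure)
MSG_BYTES = 260      # observed vote-message size
NODES = 1_000        # assumed node count

# One message per node per transaction (sending wallet only)...
one_way = TPS * NODES * MSG_BYTES / 1e9
print(f"{one_way:.1f} GB/s")        # ~1.8 GB/s

# ...and counting the receiving wallet too doubles it.
print(f"{2 * one_way:.1f} GB/s")    # ~3.6 GB/s
```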
So, if we're resigned to running Raiblocks from corporate supernodes in the future, we might want to ask ourselves, why is decentralization so important anyway? For 99.9% of the cases, I actually think it won't matter. People just want their transactions to complete in a low-cost and timely fashion. And that's why I think Ripple and Raiblocks on their current trajectories have bright futures. They are the petty cash of the future. But for bulk wealth storage, you want decentralization because it makes it hard for any one entity to gain control over your money. No government will be able to step in and freeze your funds if you're Wikileaks or a political dissident when your cryptocurrency network is hosted on millions of computers scattered across the internet. I know the millions number sounds outlandish given that Bitcoin itself has fewer than 12k nodes at present, but that's my vision for the future. And I hope that by the end of this essay, you'll agree it's plausible.
The main benefit of sharding is that it allows nodes to divide the task of hosting the ledger into smaller chunks, reducing the per-node bandwidth requirements to achieve a certain Tps. I'll show that this benefit comes without having to sacrifice ledger redundancy, so long as sufficient nodes can be recruited. One disadvantage that must be noted is the increased overhead of coordinating a large number of nodes subscribed to partial ledgers. At the very least, nodes will need to know how wealthy other nodes are for voting purposes. However, I don't see how an up-to-the-second update of nodal wealth is necessary, since wealth changes on the timescale of months, if not years. It should be sufficient to conduct a roll call once every few weeks to update nodes on who the other nodes are and to impart information about wealth and ledger subscriptions. Nonetheless, in principle this overhead means it is still possible to have too many nodes even with sharding.
Raiblocks has a unique advantage over single-chain cryptocoins in that each wallet address is already its own blockchain. This makes it especially amenable to sharding, since each wallet can already be thought of as its own shard! You just need a clever algorithm to decide which nodes subscribe to which wallets. For the purposes of this analysis, I assume a random subscription, so that for example if both you and I subscribe to 10% of the ledger, our subscriptions are probabilistically independent, and we intersect on roughly one percent of the total wallet space. I will also assume that all nodes are identical to each other in bandwidth, though in practice I think each node's owner should decide how much bandwidth he is willing to commit, letting the node's software dynamically adjust its P to maintain the desired bandwidth, where P, or the participation level, is the fraction of the ledger that the node is subscribed to. That way, when the Tps of the network increases over time, each node will use the increasing bandwidth demand as a feedback signal to automatically lower its ledger subscription percentage. Then, all that would be missing for smooth and seamless network growth is a mechanism for ensuring node count growth.
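A minimal sketch of that feedback loop - the adjustment rule and all names here are my own assumptions, not anything from the Raiblocks code:

```python
def adjust_participation(p: float, observed_bps: float,
                         committed_bps: float) -> float:
    """Nudge the node's ledger-subscription fraction P so that actual
    bandwidth usage tracks what the owner committed. Hypothetical rule:
    bandwidth scales roughly linearly with P, so scale P accordingly."""
    if observed_bps <= 0:
        return p
    scale = committed_bps / observed_bps
    # Damp the correction so the network settles instead of oscillating.
    new_p = p * (0.9 + 0.1 * scale)
    return max(min(new_p, 1.0), 1e-6)   # clamp to a valid fraction
```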
Some math

| Symbol | Definition |
|---|---|
| mps | messages per second received/sent per individual node |
| N | total number of nodes |
| Tps | transactions per second processed by the whole network |
| R | ledger redundancy |
| P | fractional participation level of an individual node |
| k | roll-call frequency |
From the definitions, it should be apparent that
(1) R = NP
There are two types of messages that nodes have to deal with: transaction messages and roll-call messages. Transaction messages are those related to updating the ledger when money is sent from one wallet to another. For each transaction, each node presiding over the sending wallet/shard will need to
  1. Broadcast its vote to the other R members of the shard. In the normal case this is a thumbs up signal and no conflict resolution is required.
  2. Receive votes from the other R members of the shard
  3. Broadcast its thumbs up to the R members of the receiving wallet/shard
Each node presiding over the receiving wallet/shard will need to
  1. receive thumbs up signals from the R members of the sending wallet/shard
Therefore, on a macro level upload and download requirements are the same. (Two messages sent, two messages received.)
Roll-call messages are those related to disseminating an active directory of which nodes are participating in which wallets, along with information about nodal wealth. Knowledge about each individual node is broadcast to the network at a rate of k. I think 10^-6 Hz is reasonable, for an update interval of about 12 days. For each update, all R nodes presiding over the wallet of the node whose information is being shared will broadcast their view of that node's wealth to all N nodes. Therefore, from the perspective of an individual node:
  1. The rate at which roll-call messages are received is kRN.
  2. The rate at which roll-call messages are sent is k(# node wallets presided over)N = k(NP)N = kRN.
Again, upload and download rates are the same - intuitively this must be true, since every message that is sent needs to be received - so the parameter mps can be used equally to describe upload and download bandwidth.
(2) mps = 2R(PTps) + kRN,
where the two terms correspond to the transaction and roll-call messages, respectively. Using (1), (2) can be rewritten as
(3) mps = 2R^2 Tps/N + kRN
Here, we see an interesting relationship between the different message categories and the node count. For a fixed ledger redundancy R and Tps, the number of transaction messages is inversely proportional to the number of nodes. This is intuitive. If all of a sudden there are twice as many nodes and ledger redundancy remains the same, then each node has halved its ledger subscription and only has to deal with half as many transactions. This is the "many hands make light work" phenomenon in action. On the other hand, the number of roll-call messages increases in proportion to the number of nodes. The interplay between these two factors determines the sweet spot where mps is at a local minimum. Since the calculus is straightforward, I'll leave it as an exercise to the reader to show that
(4) N_sweetspot = (2RTps/k)^(1/2)
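(Sketching that exercise: differentiate (3) with respect to N and set the result to zero - d(mps)/dN = -2R^2 Tps/N^2 + kR = 0 - then solve for N. The second derivative is positive, so this is indeed a minimum of mps.)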
Alternatively, another way of looking at things is to consider mps to be fixed. This may be more appropriate if each node is pegged at its committed bandwidth. Then (3) describes the relationship between the ledger redundancy and N. You may ask how this can be reconciled with (1), which seems to imply that N and R are directly proportional, but in this scenario each node is dynamically adjusting its ledger subscription P in response to a changing N to maintain a constant bandwidth mps. In this view, the sweet spot for N is where R is maximized. Interestingly, regardless of which view you take, you arrive at the same expression for the sweet spot (4).
If N < N_sweetspot, then transaction messages dominate the total message count. The system is in the transaction-heavy regime and needs more nodes to help carry the transaction load. If N > N_sweetspot (the node-heavy regime), transaction messages are few, but the number of roll-call messages is large and it becomes expensive to keep the whole network in sync. When N = N_sweetspot, the two message categories occur at the same rate, which is easily verified by plugging (4) back into (3). This is when the network is at its most decentralized: message count per node is low while redundancy is high.
Note that N_sweetspot increases as Tps^(1/2). This implies that, as transaction rate increases, the network will not optimally scale without somehow attracting new people to run nodes. But the incentives can't be too good either, or N may increase beyond N_sweetspot. Ideally, a feedback mechanism using market forces will encourage the network to gravitate towards the sweet spot (more on this later).
One special case is where P=1 and N=R. This is when the network is at its most centralized operating point, with every single node acting as a full node. This minimizes node count for a given redundancy level R and is how Raiblocks is currently designed. I will show that for most real-world numbers, the roll-call term is so small as to be negligible, but the mps is many orders of magnitude higher than in the decentralized case because of the large transaction term.
Assuming that we are able to keep the network operating at its sweet spot, by plugging (4) into (3), we arrive at
(5) mps_sweetspot = R^(3/2) (8kTps)^(1/2)
If instead we plug N=R into (3), we arrive at
(6) mps_centralized = 2RTps + kR^2
So, we see that in the decentralized case the mps of individual nodes increases as the square root of Tps, a much more sustainable form of scaling than the linear relationship in the centralized case.
And now, the moment we've all been waiting for: plugging various network load scenarios into these formulas and comparing the most decentralized case to the most centralized. Real-world operation will be somewhere in between these two extremes.
Fixed parameters

| Parameter | Value |
|---|---|
| packet size (bytes) | 260 |
| k (Hz) | 1.00E-06 |
| R | 1000 |
| transaction fee ($) | $0.01 |

| Tps | 0.1 | 1 | 10 | 100 | 1,000 | 10,000 | 100,000 |
|---|---|---|---|---|---|---|---|
| Total monthly dividends | $2,592 | $25,920 | $259,200 | $2,592,000 | $25,920,000 | $259,200,000 | $2,592,000,000 |

Decentralized node requirements

| Tps | 0.1 | 1 | 10 | 100 | 1,000 | 10,000 | 100,000 |
|---|---|---|---|---|---|---|---|
| mps (Hz) | 28 | 89 | 283 | 894 | 2,828 | 8,944 | 28,284 |
| node traffic (bytes/s) | 7.35E+03 | 2.33E+04 | 7.35E+04 | 2.33E+05 | 7.35E+05 | 2.33E+06 | 7.35E+06 |
| N | 1.41E+04 | 4.47E+04 | 1.41E+05 | 4.47E+05 | 1.41E+06 | 4.47E+06 | 1.41E+07 |
| P | 7.07E-02 | 2.24E-02 | 7.07E-03 | 2.24E-03 | 7.07E-04 | 2.24E-04 | 7.07E-05 |
| Total network traffic (bytes/s) | 1.04E+08 | 1.04E+09 | 1.04E+10 | 1.04E+11 | 1.04E+12 | 1.04E+13 | 1.04E+14 |
| Yearly network traffic (bytes) | 3.28E+15 | 3.28E+16 | 3.28E+17 | 3.28E+18 | 3.28E+19 | 3.28E+20 | 3.28E+21 |

Decentralized node income

| Tps | 0.1 | 1 | 10 | 100 | 1,000 | 10,000 | 100,000 |
|---|---|---|---|---|---|---|---|
| monthly per node ($) | $0.18 | $0.58 | $1.83 | $5.80 | $18.33 | $57.96 | $183.28 |
| income/GB ($/GB) | $0.0096 | $0.0096 | $0.0096 | $0.0096 | $0.0096 | $0.0096 | $0.0096 |

Centralized node requirements

| Tps | 0.1 | 1 | 10 | 100 | 1,000 | 10,000 | 100,000 |
|---|---|---|---|---|---|---|---|
| mps (Hz) | 2.01E+02 | 2.00E+03 | 2.00E+04 | 2.00E+05 | 2.00E+06 | 2.00E+07 | 2.00E+08 |
| node traffic (bytes/s) | 5.23E+04 | 5.20E+05 | 5.20E+06 | 5.20E+07 | 5.20E+08 | 5.20E+09 | 5.20E+10 |
| N | 1000 | 1000 | 1000 | 1000 | 1000 | 1000 | 1000 |
| P | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| Total network traffic (bytes/s) | 5.23E+07 | 5.20E+08 | 5.20E+09 | 5.20E+10 | 5.20E+11 | 5.20E+12 | 5.20E+13 |
| Yearly network traffic (bytes) | 1.65E+15 | 1.64E+16 | 1.64E+17 | 1.64E+18 | 1.64E+19 | 1.64E+20 | 1.64E+21 |

Centralized node income

| Tps | 0.1 | 1 | 10 | 100 | 1,000 | 10,000 | 100,000 |
|---|---|---|---|---|---|---|---|
| monthly per node ($) | $2.59 | $25.92 | $259.20 | $2,592 | $25,920 | $259,200 | $2,592,000 |
| income/GB ($/GB) | $0.0191 | $0.0192 | $0.0192 | $0.0192 | $0.0192 | $0.0192 | $0.0192 |
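If you want to check the tables against equations (1)-(6), here is a minimal sketch (parameter names are mine) that reproduces the 100,000 Tps column:

```python
from math import sqrt

R, k, FEE, MSG = 1000, 1e-6, 0.01, 260   # redundancy, roll-call rate (Hz),
                                          # fee ($), message size (bytes)
Tps = 100_000

# Decentralized sweet spot, equations (4) and (5)
N = sqrt(2 * R * Tps / k)                 # ~1.41e7 nodes
mps = R**1.5 * sqrt(8 * k * Tps)          # ~28,284 messages/s per node
traffic = mps * MSG                       # ~7.35e6 bytes/s per node

# Income: total fees split evenly across N nodes
monthly = Tps * FEE * 86_400 * 30 / N     # ~$183 per node per month
per_gb = monthly / (traffic * 86_400 * 30 / 1e9)   # ~$0.0096/GB

# Centralized case, equation (6): every node is a full node (N = R)
mps_c = 2 * R * Tps + k * R**2            # ~2.00e8 messages/s per node

print(N, mps, traffic, monthly, per_gb, mps_c)
```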
Yes, I did sneak a transaction fee in there, which is anathema to the Raiblocks way. But I wanted to incentivize people to run nodes. Observe that income per gigabyte remains the same, independent of network Tps, because both total income and total bandwidth scale proportionally to Tps. The decentralized case has half the income/GB because the roll-call overhead doubles network activity. In either case, the income per GB depends only on the transaction fee and is independent of network load.
An interesting number to check online is the price/GB that various ISPs charge. With Google Fiber, it is possible to purchase bandwidth for as little as $0.00076 per GB, meaning that nodes might remain profitable even if fees were lowered by another order of magnitude. As time progresses, bandwidth costs will only go down, so fees may eventually drop even further. But because of electricity and other miscellaneous costs, I think a one-cent transaction fee is probably pretty close to what people need to incentivize them to run nodes.
With sharding, even many home broadband connections today can feasibly support 100,000 transactions per second, with each node subscribed to about one ten-thousandth of the total ledger and handling about 7 MB/s. Getting 14 million people to run nodes may seem like a tall order, but the financial incentives are there. Just look at all the people who have rushed to do GPU mining. Here, bandwidth replaces hashing power as the tool used for mining.
According to a study done by Cisco, yearly internet traffic is projected to reach 3.3 ZB by 2021. Looking at the table, that means if we ever reach 100,000 Tps, Sharded Raiblocks traffic would be equal to the rest of the world combined. Yikes! But if you think about it, nobody along the way is taking on an unbearable load. Users pay low fees for transactions. Nodes get dividends. ISPs get additional customers. The only ones who lose out are Visa, Paypal, and banks.
With such a large network presence, the cultural impact of this coin would be huge. That, in addition to the sheer number of participants running nodes as side businesses, would cement this as the coin of the people.
From a macro level, I see no red flags that would indicate this is economically or technically infeasible. Of course, the devil's in the details, so I'm posting this to see if people think I'm on the right track. To me, it seems that the possibilities are tantalizing and someone needs to build a test net to see if this idea flies (u/meor, if any of this sounds appealing, are you guys hiring? ;) ).
I've only scratched the surface, and there are many other topics worthy of deeper discussion.
submitted by Cookiemole to RaiBlocks

Preventing double-spends is an "embarrassingly parallel" massive search problem - like Google, SETI@home, Folding@home, or PrimeGrid. BUIP024 "address sharding" is similar to Google's MapReduce & Berkeley's BOINC grid computing - "divide-and-conquer" providing unlimited on-chain scaling for Bitcoin.

TL;DR: Like all other successful projects involving "embarrassingly parallel" search problems in massive search spaces, Bitcoin can and should - and inevitably will - move to a distributed computing paradigm based on successful "sharding" architectures such as Google Search (based on Google's MapReduce algorithm), or SETI@home, Folding@home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture) - which use simple mathematical "decompose" and "recompose" operations to break big problems into tiny pieces, providing virtually unlimited scaling (plus fault tolerance) at the logical / software level, on top of possibly severely limited (and faulty) resources at the physical / hardware level.
The discredited "heavy" (and over-complicated) design philosophy of centralized "legacy" dev teams such as Core / Blockstream (requiring every single node to download, store and verify the massively growing blockchain, and pinning their hopes on non-existent off-chain vaporware such as the so-called "Lightning Network" which has no mathematical definition and is missing crucial components such as decentralized routing) is doomed to failure, and will be out-competed by simpler on-chain "lightweight" distributed approaches such as distributed trustless Merkle trees or BUIP024's "Address Sharding" emerging from independent devs such as u/thezerg1 (involved with Bitcoin Unlimited).
No one in their right mind would expect Google's vast search engine to fit entirely on a Raspberry Pi behind a crappy Internet connection - and no one in their right mind should expect Bitcoin's vast financial network to fit entirely on a Raspberry Pi behind a crappy Internet connection either.
Any "normal" (ie, competent) company with $76 million to spend could provide virtually unlimited on-chain scaling for Bitcoin in a matter of months - simply by working with devs who would just go ahead and apply the existing obvious mature successful tried-and-true "recipes" for solving "embarrassingly parallel" search problems in massive search spaces, based on standard DISTRIBUTED COMPUTING approaches like Google Search (based on Google's MapReduce algorithm), or [email protected], [email protected], or PrimeGrid (based on Berkeley's BOINC grid computing architecture). The fact that Blockstream / Core devs refuse to consider any standard DISTRIBUTED COMPUTING approaches just proves that they're "embarrassingly stupid" - and the only way Bitcoin will succeed is by routing around their damage.
Proven, mature sharding architectures like the ones powering Google Search, SETI@home, Folding@home, or PrimeGrid will allow Bitcoin to achieve virtually unlimited on-chain scaling, with minimal disruption to the existing Bitcoin network topology and mining and wallet software.
Longer Summary:
People who argue that "Bitcoin can't scale" - because it involves major physical / hardware requirements (lots of processing power, upload bandwidth, storage space) - are at best simply misinformed or incompetent - or at worst outright lying to you.
Bitcoin mainly involves searching the blockchain to prevent double-spends - and so it is similar to many other projects involving "embarrassingly parallel" searching in massive search spaces - like Google Search, SETI@home, Folding@home, or PrimeGrid.
But there's a big difference between those long-running wildly successful massively distributed infinitely scalable parallel computing projects, and Bitcoin.
Those other projects do their data storage and processing across a distributed network. But Bitcoin (under the misguided "leadership" of Core / Blockstream devs) insists on a fatally flawed design philosophy where every individual node must be able to download, store and verify the system's entire data structure. And it's even worse than that - they want to let the least powerful nodes in the system dictate the resource requirements for everyone else.
Meanwhile, those other projects are all based on some kind of "distributed computing" involving "sharding". They achieve massive scaling by adding a virtually unlimited (and fault-tolerant) logical / software layer on top of the underlying resource-constrained / limited physical / hardware layer - using approaches like Google's MapReduce algorithm or Berkeley's Open Infrastructure for Network Computing (BOINC) grid computing architecture.
This shows that it is a fundamental error to continue insisting on viewing an individual Bitcoin "node" as the fundamental "unit" of the Bitcoin network. Coordinated distributed pools already exist for mining the blockchain - and eventually coordinated distributed trustless architectures will also exist for verifying and querying it. Any architecture or design philosophy where a single "node" is expected to be forever responsible for storing or verifying the entire blockchain is the wrong approach, and is doomed to failure.
The most well-known example of this doomed approach is Blockstream / Core's "roadmap" - which is based on two disastrously erroneous design requirements:
  • Core / Blockstream erroneously insist that the entire blockchain must always be downloadable, storable and verifiable on a single node, as dictated by the least powerful nodes in the system (eg, u/bitusher in Costa Rica, or u/Luke-Jr in the underserved backwoods of Florida); and
  • Core / Blockstream support convoluted, incomplete off-chain scaling approaches such as the so-called "Lightning Network" - which lacks a mathematical foundation, and also has some serious gaps (eg, no solution for decentralized routing).
Instead, the future of Bitcoin will inevitably be based on unlimited on-chain scaling, where all of Bitcoin's existing algorithms and data structures and networking are essentially preserved unchanged / as-is - but they are distributed at the logical / software level using sharding approaches such as u/thezerg1's BUIP024 or distributed trustless Merkle trees.
These kinds of sharding architectures will allow individual nodes to use a minimum of physical resources to access a maximum of logical storage and processing resources across a distributed network with virtually unlimited on-chain scaling - where every node will be able to use and verify the entire blockchain without having to download and store the whole thing - just like Google Search, SETI@home, Folding@home, PrimeGrid and other successful distributed sharding-based projects have already been doing for years.
Sharding, which has been so successful in many other areas, is a topic that keeps resurfacing in various shapes and forms among independent Bitcoin developers.
The highly successful track record of sharding architectures on other projects involving "embarrassingly parallel" massive search problems (harnessing resource-constrained machines at the physical level into a distributed network at the logical level, in order to provide fault tolerance and virtually unlimited scaling searching for web pages, interstellar radio signals, protein sequences, or prime numbers in massive search spaces up to hundreds of terabytes in size) provides convincing evidence that sharding architectures will also work for Bitcoin (which also requires virtually unlimited on-chain scaling, searching the ever-expanding blockchain for previous "spends" from an existing address, before appending a new transaction from this address to the blockchain).
Below are some links involving proposals for sharding Bitcoin, plus more discussion and related examples.
BUIP024: Extension Blocks with Address Sharding
Why aren't we as a community talking about Sharding as a scaling solution?
(There are some detailed, partially encouraging comments from u/petertodd in that thread.)
[Brainstorming] Could Bitcoin ever scale like BitTorrent, using something like "mempool sharding"?
[Brainstorming] "Let's Fork Smarter, Not Harder"? Can we find some natural way(s) of making the scaling problem "embarrassingly parallel", perhaps introducing some hierarchical (tree) structures or some natural "sharding" at the level of the network and/or the mempool and/or the blockchain?
"Braiding the Blockchain" (32 min + Q&A): We can't remove all sources of latency. We can redesign the "chain" to tolerate multiple simultaneous writers. Let miners mine and validate at the same time. Ideal block time / size / difficulty can become emergent per-node properties of the network topology
Some kind of sharding - perhaps based on address sharding as in BUIP024, or based on distributed trustless Merkle trees as proposed earlier by u/thezerg1 - is very likely to turn out to be the simplest and safest approach towards massive on-chain scaling.
A thought experiment showing that we already have most of the ingredients for a kind of simplistic "instant sharding"
A simplistic thought experiment can be used to illustrate how easy it could be to do sharding - with almost no changes to the existing Bitcoin system.
Recall that Bitcoin addresses and keys are composed from an alphabet of 58 characters. So, in this simplified thought experiment, we will outline a way to add a kind of "instant sharding" within the existing system - by using the last character of each address in order to assign that address to one of 58 shards.
(Maybe you can already see where this is going...)
Similar to vanity address generation, a user who wants to receive Bitcoins would be required to generate 58 different receiving addresses (each ending with a different character) - and, similarly, miners could be required to pick one of the 58 shards to mine on.
Then, when a user wanted to send money, they would have to look at the last character of their "send from" address - and also select a "send to" address ending in the same character - and presto! we already have a kind of simplistic "instant sharding". (And note that this part of the thought experiment would require only the "softest" kind of soft fork: indeed, we haven't changed any of the code at all, but instead we simply adopted a new convention by agreement, while using the existing code.)
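A minimal sketch of that convention (the helper names are mine):

```python
# "Instant sharding": assign each address to one of 58 shards based on
# its final base58 character.
BASE58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def shard_of(address: str) -> int:
    """Shard index 0..57, determined by the address's last character."""
    return BASE58.index(address[-1])

# Under the proposed convention, a valid transaction stays in one shard:
def same_shard(send_addr: str, recv_addr: str) -> bool:
    return shard_of(send_addr) == shard_of(recv_addr)
```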
Of course, this simplistic "instant sharding" example would still need a few more features in order to be complete - but they'd all be fairly straightforward to provide:
  • A transaction can actually send from multiple addresses, to multiple addresses - so the approach of simply looking at the final character of a single (receive) address would not be enough to instantly assign a transaction to a particular shard. But a slightly more sophisticated decision criterion could easily be developed - and computed using code - to assign every transaction to a particular shard, based on the "from" and "to" addresses in the transaction. The basic concept from the "simplistic" example would remain the same, sharding the network based on some characteristic of transactions.
  • If we had 58 shards, then the mining reward would have to be decreased to 1/58 of what it currently is - and the mining hash power on each of the shards would end up being roughly 1/58 of what it is now. In general, many people might agree that decreased mining rewards would actually be a good thing (spreading mining rewards among more people, instead of the current situation where mining is done by about 8 entities). Also, network hashing power has been growing insanely for years, so we probably have way more than enough to secure the network - after all, Bitcoin was secure back when network hash power was 1/58 of what it is now.
  • This simplistic example does not handle cases where you need to do "cross-shard" transactions. But it should be feasible to implement such a thing. The various proposals from u/thezerg1 such as BUIP024 do deal with "cross-shard" transactions.
(Also, the fact that simplified address-based sharding mechanics can be outlined in just a few paragraphs as shown here suggests that this might be "simple and understandable enough to actually work" - unlike something such as the so-called "Lightning Network", which is actually just a catchy-sounding name with no clearly defined mechanics or mathematics behind it.)
Addresses are plentiful, and can be generated locally, and you can generate addresses satisfying a certain pattern (eg ending in a certain character) the same way people can already generate vanity addresses. So imposing a "convention" where the "send" and "receive" address would have to end in the same character (and where the miner has to only mine transactions in that shard) - would be easy to understand and do.
Similarly, the earlier solution proposed by u/thezerg1, involving distributed trustless Merkle trees, is easy to understand: you'd just be distributing the Merkle tree across multiple nodes, while still preserving its immutability guarantees.
Such approaches don't really change much about the actual system itself. They preserve the existing system, and just split its data structures into multiple pieces, distributed across the network. As long as we have the appropriate operators for decomposing and recomposing the pieces, then everything should work the same - but more efficiently, with unlimited on-chain scaling, and much lower resource requirements.
The examples below show how these kinds of "sharding" approaches have already been implemented successfully in many other systems.
Massive search is already efficiently performed with virtually unlimited scaling using divide-and-conquer / decompose-and-recompose approaches such as MapReduce and BOINC.
Every time you do a Google search, you're using Google's MapReduce algorithm to solve an embarrassingly parallel problem.
And distributed computing grids using the Berkeley Open Infrastructure for Network Computing (BOINC) are constantly setting new records searching for protein combinations, prime numbers, or radio signals from possible intelligent life in the universe.
We all use Google to search hundreds of terabytes of data on the web and get results in a fraction of a second - using cheap "commodity boxes" on the server side, and possibly using limited bandwidth on the client side - with fault tolerance to handle crashing servers and dropped connections.
Other examples are Folding@home, SETI@home and PrimeGrid - involving searching massive search spaces for protein sequences, interstellar radio signals, or prime numbers hundreds of thousands of digits long. Each of these examples uses sharding to decompose a giant search space into smaller sub-spaces which are searched separately in parallel, and then the resulting (sub-)solutions are recomposed to provide the overall search results.
It seems obvious to apply this tactic to Bitcoin - searching the blockchain for existing transactions involving a "send" from an address, before appending a new "send" transaction from that address to the blockchain.
Some people might object that those systems are different from Bitcoin.
But we should remember that preventing double-spends (the main thing that Bitcoin does) is, after all, an embarrassingly parallel massive search problem - and all of these other systems involve embarrassingly parallel massive search problems too.
The mathematics of Google's MapReduce and Berkeley's BOINC is simple, elegant, powerful - and provably correct.
Google's MapReduce and Berkeley's BOINC have demonstrated that in order to provide massive scaling for efficient searching of massive search spaces, all you need is...
  • an appropriate "decompose" operation,
  • an appropriate "recompose" operation,
  • the necessary coordination mechanisms in order to distribute a single problem across multiple, cheap, fault-tolerant processors.
This allows you to decompose the problem into tiny sub-problems, solve each sub-problem to produce a sub-solution, and then recompose the sub-solutions into the overall solution - gaining virtually unlimited scaling and massive efficiency.
The only "hard" part involves analyzing the search space in order to select the appropriate DECOMPOSE and RECOMPOSE operations which guarantee that recomposing the "sub-solutions" obtained by decomposing the original problem is equivalent to the solving the original problem. This essential property could be expressed in "pseudo-code" as follows:
Selecting the appropriate DECOMPOSE and RECOMPOSE operations (and implementing the inter-machine communication coordination) can be somewhat challenging, but it's certainly doable.
In fact, as mentioned already, these things have already been done in many distributed computing systems. So there's hardly any original work to be done in this case. All we need to focus on now is translating the existing single-processor architecture of Bitcoin to a distributed architecture, adopting the mature, proven, efficient "recipes" provided by the many successful distributed systems already up and running, such as Google Search (based on Google's MapReduce algorithm), or SETI@home, Folding@home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture).
That's what any "competent" company with $76 million to spend would have done already - simply work with some devs who know how to implement open-source distributed systems, and focus on adapting Bitcoin's particular data structures (merkle trees, hashed chains) to a distributed environment. That's a realistic roadmap that any team of decent programmers with distributed computing experience could easily implement in a few months, and any decent managers could easily manage and roll out on a pre-determined schedule - instead of all these broken promises and missed deadlines and non-existent vaporware and pathetic excuses we've been getting from the incompetent losers and frauds involved with Core / Blockstream.
ASIDE: MapReduce and BOINC are based on math - but the so-called "Lightning Network" is based on wishful thinking involving kludges on top of workarounds on top of hacks - which is how you can tell that LN will never work.
Once you have succeeded in selecting the appropriate mathematical DECOMPOSE and RECOMPOSE operations, you get simple massive scaling - and it's also simple for anyone to verify that these operations are correct - often in about a half-page of math and code.
An example of this kind of elegance and brevity (and provable correctness) involving compositionality can be seen in this YouTube clip by the accomplished mathematician Lucius Greg Meredith presenting some operators for scaling Ethereum - in just a half page of code:
Conversely, if you fail to select the appropriate mathematical DECOMPOSE and RECOMPOSE operations, then you end up with a convoluted mess of wishful thinking - like the "whitepaper" for the so-called "Lightning Network", which is just a cool-sounding name with no actual mathematics behind it.
The LN "whitepaper" is an amateurish, non-mathematical meandering mishmash of 60 pages of "Alice sends Bob" examples involving hacks on top of workarounds on top of kludges - also containing a fatal flaw (a lack of any proposed solution for doing decentralized routing).
The disaster of the so-called "Lightning Network" - involving adding never-ending kludges on top of hacks on top of workarounds (plus all kinds of "timing" dependencies) - is reminiscent of the "epicycles" which were desperately added in a last-ditch attempt to make Ptolemy's "geocentric" system work - based on the incorrect assumption that the Sun revolved around the Earth.
This is how you can tell that the approach of the so-called "Lightning Network" is simply wrong, and it would never work - because it fails to provide appropriate (and simple, and provably correct) mathematical DECOMPOSE and RECOMPOSE operations in less than a single page of math and code.
Meanwhile, sharding approaches based on a DECOMPOSE and RECOMPOSE operation are simple and elegant - and "functional" (ie, they don't involve "procedural" timing dependencies like keeping your node running all the time, or closing out your channel before a certain deadline).
Bitcoin only has 6,000 nodes - but the leading sharding-based projects have over 100,000 nodes, with no financial incentives.
Many of these sharding-based projects have many more nodes than the Bitcoin network.
The Bitcoin network currently has about 6,000 nodes - even though there are financial incentives for running a node (ie, verifying your own Bitcoin balance).
Folding@home and SETI@home each have over 100,000 active users - even though these projects don't provide any financial incentives. This higher number of users might be due in part to the low resource demands of these BOINC-based projects, which are all based on sharding the data set.
Folding@home
As part of the client-server network architecture, the volunteered machines each receive pieces of a simulation (work units), complete them, and return them to the project's database servers, where the units are compiled into an overall simulation.
In 2007, Guinness World Records recognized Folding@home as the most powerful distributed computing network. As of September 30, 2014, the project has 107,708 active CPU cores and 63,977 active GPUs for a total of 40.190 x86 petaFLOPS (19.282 native petaFLOPS). At the same time, the combined efforts of all distributed computing projects under BOINC totals 7.924 petaFLOPS.
SETI@home
Using distributed computing, SETI@home sends millions of chunks of data to be analyzed off-site by home computers, and then has those computers report the results. Thus what appears an onerous problem in data analysis is reduced to a reasonable one by aid from a large, Internet-based community of borrowed computer resources.
Observational data are recorded on 2-terabyte SATA hard disk drives at the Arecibo Observatory in Puerto Rico, each holding about 2.5 days of observations, which are then sent to Berkeley. Arecibo does not have a broadband Internet connection, so data must go by postal mail to Berkeley. Once there, it is divided in both time and frequency domains into work units of 107 seconds of data, or approximately 0.35 megabytes (350 kilobytes or 350,000 bytes), which overlap in time but not in frequency. These work units are then sent from the SETI@home server over the Internet to personal computers around the world to analyze.
Data is merged into a database using SETI@home computers in Berkeley.
The SETI@home distributed computing software runs either as a screensaver or continuously while a user works, making use of processor time that would otherwise be unused.
Active users: 121,780 (January 2015)
PrimeGrid is a distributed computing project for searching for prime numbers of world-record size. It makes use of the Berkeley Open Infrastructure for Network Computing (BOINC) platform.
Active users 8,382 (March 2016)
A MapReduce program is composed of a Map() procedure (method) that performs filtering and sorting (such as sorting students by first name into queues, one queue for each name) and a Reduce() method that performs a summary operation (such as counting the number of students in each queue, yielding name frequencies).
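As a rough sketch, here is the quoted example in code (the toy data and function names are mine):

```python
from collections import defaultdict

# Map() sorts students into queues keyed by first name;
# Reduce() counts each queue, yielding name frequencies.
def map_phase(students):
    for student in students:
        yield (student["first_name"], 1)      # emit (key, value) pairs

def reduce_phase(pairs):
    counts = defaultdict(int)
    for name, one in pairs:                   # group by key...
        counts[name] += one                   # ...and summarize each group
    return dict(counts)

students = [{"first_name": "Ada"}, {"first_name": "Alan"},
            {"first_name": "Ada"}]
print(reduce_phase(map_phase(students)))      # {'Ada': 2, 'Alan': 1}
```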
How can we go about developing sharding approaches for Bitcoin?
We have to identify a part of the problem which is in some sense "invariant" or "unchanged" under the operations of DECOMPOSE and RECOMPOSE - and we also have to develop a coordination mechanism which orchestrates the DECOMPOSE and RECOMPOSE operations among the machines.
The simplistic thought experiment above outlined an "instant sharding" approach where we would agree upon a convention where the "send" and "receive" address would have to end in the same character - instantly providing a starting point illustrating some of the mechanics of an actual sharding solution.
BUIP024 involves address sharding and deals with the additional features needed for a complete solution - such as cross-shard transactions.
And distributed trustless Merkle trees would involve storing Merkle trees across a distributed network - which would provide the same guarantees of immutability, while drastically reducing storage requirements.
So how can we apply ideas like MapReduce and BOINC to providing massive on-chain scaling for Bitcoin?
First we have to examine the structure of the problem that we're trying to solve - and we have to try to identify how the problem involves a massive search space which can be decomposed and recomposed.
In the case of Bitcoin, the problem involves:
  • sequentializing (serializing) APPEND operations to a blockchain data structure
  • in such a way as to avoid double-spends
Can we view "preventing Bitcoin double-spends" as a "massive search space problem"?
Yes we can!
Just like Google efficiently searches hundreds of terabytes of web pages for a particular phrase (and SETI@home, Folding@home, PrimeGrid etc. efficiently search massive search spaces for other patterns), in the case of "preventing Bitcoin double-spends", all we're actually doing is searching a massive search space (the blockchain) in order to detect a previous "spend" of the same coin(s).
So, let's imagine how a possible future sharding-based architecture of Bitcoin might look.
We can observe that, in all cases of successful sharding solutions involving searching massive search spaces, the entire data structure is never stored / searched on a single machine.
Instead, the DECOMPOSE and RECOMPOSE operations (and the coordination mechanism) create a "virtual" layer or grid across multiple machines - allowing the data structure to be distributed across all of them, and allowing users to search across all of them.
This suggests that requiring everyone to store 80 Gigabytes (and growing) of blockchain on their own individual machine should no longer be a long-term design goal for Bitcoin.
Instead, in a sharding environment, the DECOMPOSE and RECOMPOSE operations (and the coordination mechanism) should allow everyone to only store a portion of the blockchain on their machine - while also allowing anyone to search the entire blockchain across everyone's machines.
This might involve something like BUIP024's "address sharding" - or it could involve something like distributed trustless Merkle trees.
In either case, it's easy to see that the basic data structures of the system would remain conceptually unaltered - but in the sharding approaches, these structures would be logically distributed across multiple physical devices, in order to provide virtually unlimited scaling while dramatically reducing resource requirements.
This would be the most "conservative" approach to scaling Bitcoin: leaving the data structures of the system conceptually the same - and just spreading them out more, by adding the appropriately defined mathematical DECOMPOSE and RECOMPOSE operators (used in successful sharding approaches), which can be easily proven to preserve the same properties as the original system.
Bitcoin isn't the only project in the world which is permissionless and distributed.
Other projects (the BOINC-based, permissionless, decentralized SETI@home, Folding@home, and PrimeGrid - as well as Google's permissioned, centralized MapReduce-based search engine) have already achieved unlimited scaling by providing simple mathematical DECOMPOSE and RECOMPOSE operations (and coordination mechanisms) to break big problems into smaller pieces - without changing the properties of the problems or solutions. This provides massive scaling while dramatically reducing resource requirements - with several projects attracting over 100,000 nodes, far more than Bitcoin's mere 6,000 nodes - without even offering any of Bitcoin's financial incentives.
Although certain "legacy" Bitcoin development teams such as Blockstream / Core have been neglecting sharding-based scaling approaches to massive on-chain scaling (perhaps because their business models are based on misguided off-chain scaling approaches involving radical changes to Bitcoin's current successful network architecture, or even perhaps because their owners such as AXA and PwC don't want a counterparty-free new asset class to succeed and destroy their debt-based fiat wealth), emerging proposals from independent developers suggest that on-chain scaling for Bitcoin will be based on proven sharding architectures such as MapReduce and BOINC - and so we should pay more attention to these innovative, independent developers who are pursuing this important and promising line of research into providing sharding solutions for virtually unlimited on-chain Bitcoin scaling.
submitted by ydtm to btc

Bitcoin is just a protocol. The money part is an app. How will Bitcoin change data for the better?

It's not a whitepaper. Yet. Actually it's more of a train of thought...
Edit: by Bitcoin I mean BCH.
The Bitcoin blockchain as a method of storage
The Bitcoin network is a diverse and remarkable thing. Springing up from nowhere, it is now a global powerhouse with redundant nodes in several countries on almost every continent (Antarctica soon?). Each modern mining node is a powerhouse: petahashes of energy-sucking hash power attached to a server-grade computer with some RAM and a few hard drives. This makes Bitcoin the most secure network in the world for financial transactions. There isn't a single instance of someone's money being compromised through anything but human factors. While this is great, I think it's time to start thinking about what Bitcoin can really do.
As many of you may recall, it was once commonplace to see files and information digitally encoded into Bitcoin transactions and sent up onto the blockchain. The user could transmit a bunch of 0- or 1-satoshi transactions to public addresses whose bytes were actually plaintext, or some other file-readable format that could be extracted later. The Bitcoin whitepaper is stored there, available until the very last copy of the very last chain of Bitcoin in any form ceases to exist…
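A rough sketch of how such an embedding might work - the chunk size and helper names here are illustrative assumptions, not the exact historical encoding:

```python
# Sketch of the old data-embedding trick: split a file into 20-byte
# chunks (the size of a hash160 address payload) and use each chunk
# *as if* it were the hash of a destination address, attaching a dust
# output (e.g. 1 satoshi) to each. Anyone can later reassemble the
# file by reading those outputs back off the chain.
CHUNK = 20  # bytes in a hash160 payload (illustrative)

def file_to_address_payloads(data: bytes) -> list[bytes]:
    padded = data + b"\x00" * (-len(data) % CHUNK)   # pad to a multiple
    return [padded[i:i + CHUNK] for i in range(0, len(padded), CHUNK)]

def payloads_to_file(payloads: list[bytes]) -> bytes:
    # A real scheme would store the true length; stripping padding
    # zeros is good enough for this sketch.
    return b"".join(payloads).rstrip(b"\x00")
```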
We use this amazing gift to send money. Not that that’s not important… After all, none of the rest can come without the money part coming first.
Imagine now being able to use the Bitcoin blockchain as a method of storage for anything digital. Storage of funds, storage of files, information, your wedding photo, photos of your kids, important events and just stuff that you want to remember. Eventually, maybe everything about us. Medical records. Deeds to land. Business relations. Social life, interests. Anything you want to keep private can be kept private, and anything you want to show to someone can be shared, with extreme granularity.
“What?” you say. “Madness”. Transactions cost money! You can’t send a 1MB file onto the blockchain every day! You'd need terabyte blocks! Well, I say let’s look at that. Currently blockspace is easy to buy at 1Sat/byte, with several miners already readily emptying blocks up to 8MB. This means that 1MB of blockspace is easily had for 3,000,000 sat ($80ish today I think?). This seems expensive at the moment, however when you read that people are promoting a move to prices of 10bytes/Sat, you can see where the price curve goes to pretty quick.
If you factor in that there will be 100,000 copies of the Bitcoin blockchain on 100,000 hard drives, what does that mean in terms of cost?
As per this article, hard drive prices are nudging $0.02/GB. That’s $0.00002/MB. This means that 100,000 1MB files spread over 100,000 hard drives would cost approximately $2 worth of hard drive space globally, and maybe $1 in electricity, forever. A miner currently gets $80 in fees alone to take that data, have a chance at mining it, and then store it indefinitely. The block reward is hundreds of times this amount. The indefinite storage requirement is the bond they pay to sit at the block-reward roulette wheel. The fee reward is currently skewed heavily towards the house, which tells me that costs have a long way to go down, and should in fact continue to drop in line with hardware prices once they catch up with the current reality (that it’s currently at least 1000% overpriced).
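The arithmetic, as a quick sanity check:

```python
# Global cost of storing one 1MB file on every copy of the blockchain.
COPIES = 100_000            # assumed full copies of the blockchain
FILE_MB = 1                 # size of the stored file, in MB
USD_PER_MB = 0.00002        # from ~$0.02/GB hard drive pricing

global_storage_cost = COPIES * FILE_MB * USD_PER_MB
print(f"${global_storage_cost:.2f}")   # ~$2 of disk, worldwide, forever
```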
If you were to write software that could package data files as UTXOs and send them as a transaction, you could create something revolutionary. An immutable data record. It will exist for as long as Bitcoin. There’s no company to fail and lose your data. They can’t prevent you from ever accessing it. As long as you have your keys, you can see it, read it and use it.
With this, you could essentially create an immutable record of all of your information. A ledger of you. Photos, deeds, certificates, qualifications. Eventually as the price of storing your information forever on the blockchain drops, videos, large files, software releases, operating systems. Imagine knowing that on block 1,128,873,456 there was the one and only copy of Windows 2030 ever published by Microsoft.
The most amazing thing about Bitcoin is that the more we use it, and the more we challenge miners to do it better, store more, respond quickly to requests for information retrieval, the faster these capabilities will become possible. Start thinking of the Bitcoin blockchain as something malleable, that will be there forever, which you can control as you want, but which belongs to everyone.
Bitcoin miners have way more hashpower than they need right now. We need to start ratcheting up our use of the blockchain and showing them they need to prepare for GB blocks in a year or two, and TB blocks in less than a decade. Some will whinge about it, but the real Bitcoin miners will step up to the plate. Mining pools who refuse to invest in storage will be left behind as the blockchain grows exponentially. This is amazing for the health of the blockchain. It challenges everyone to do it better, faster, and cheaper than the next guy.
Remember, Bitcoin is just a protocol. The money part is an app.
submitted by The_Beer_Engineer to btc

Review: Torguard vs Nordvpn vs Private Internet Access vs Vypervpn

This is a long review and I'm not a good writer, so a summary is given first. Hopefully you will find the information helpful, or at least it might give you some ideas about what you want from your VPN.
Summary Overall
All 4 VPNs performed at roughly the same speed for me. VyprVPN is NOT suitable for torrenting. VyprVPN is aimed at business, and their website, Golden Frog, makes a big deal about respecting copyright AND they keep logs. Fortunately neither Torguard nor PIA (Private Internet Access) nor NordVPN keeps logs or is concerned about copyright. On the plus side, VyprVPN was the only VPN I could use to post on 4chan, and it has some good features. If you are NOT torrenting, I think you would be happy with any of the four reviewed here, choosing on whatever criteria matter to you. However, if you are torrenting, avoid VyprVPN and look at NordVPN, PIA or Torguard. Personally I would go for PIA or Torguard, as I dislike the lack of an internet kill switch in NordVPN. However, each VPN has its own features and benefits that you may or may not find useful.
Summary by category
Speed
I found, when measuring real data downloaded, that each VPN was capable of "maxing out" my connection at 15-18 Mbps. If you have faster broadband your results may differ.
Logging
VyprVPN keeps logs and passes on DMCA notices. PIA, Torguard and NordVPN don't.
Desktop Apps
VyprVPN is the best; Torguard and PIA are good. NordVPN is a bit lacking.
Kill switch
They all offer internet kill switches apart from NordVPN, which only has the less useful per-app kill switches.
Special types of connection
Only PIA doesn't have at least one form of special server connection, but the usefulness of these special connections is debatable. NordVPN, Torguard and VyprVPN offer DDoS-protected servers (sometimes at extra cost). That might be useful to online gamers/streamers. Torguard and NordVPN offer dedicated IP addresses.
PIA and Torguard have apps; NordVPN and VyprVPN have OpenVPN config files.
Android apps
NordVPN's app was annoying to use. The others worked well. Extra features vary, but you may not need any of these extra features.
Payment methods
Torguard and PIA have lots of ways to pay. Only VyprVPN doesn't currently accept bitcoin payment.
VyprVPN is the most expensive and PIA had the cheapest overall deal (at the time of writing).
Subjectively, VyprVPN was the least likely to be banned from a website; Torguard and NordVPN the most likely.
NOTE: The VPNs WILL change after this review (early August 2016). Features will be added and prices will change. If something is important to you, check the VPN's website before you buy. I strongly recommend you take at least a one-month trial before you commit to a longer-term, cheaper deal.

In detail

Measuring "maximum speed" is very difficult if you don't have very fast broadband. Firstly and other "speed testing websites" were in my opinion useless. (Changing server on would get vastly different results with 1Mbps becoming 20Mbps at a different server. One server measured my vpn speed as double than of my maximum non-vpn speed - surely something wrong.)
I decided that in order to measure consistent and reliable speed I would take rar files from my seedbox and upload them to megafileupload (I found megafileupload to be the fastest filehoster offering remote upload; downloading from megafileupload was faster than using FTP direct from my seedbox). My broadband can download roughly 45-55GB over a six-and-a-half-hour night-time window. This equates to 15-18 Mbps. Here are the results I got, in gigabytes:
  • NordVPN
33.2*1 44.7 44.8 46.6
  • Torguard
45.2 46.1 47.2 48.6 53.0
  • PIA
45.1 46.5 47.7 50.2 50.6 56.3
  • Vyper
45.9 48.6 49.5 52.5 53.3
(These figures are real data downloaded in GB over a 6.5-hour night, NOT total data used by the VPN - median values are in bold.)
(*1 I would ignore the lone poor result that NordVPN got. My broadband and filehoster are far from perfect. I had these VPNs for 2 months, and before I got very strict about time and measurement, NordVPN performed as well as the others.)
Overall, allowing for a margin of error with both my internet and my filehoster, I would call it a draw in practical terms.
(Note: You may get different results if you have either faster broadband or you are torrenting. I used the filehoster download method to get a comparable result between VPNs. However, my traffic was all download and used only a few connections. Torrenting can involve hundreds of connections, both downloading and uploading.)
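For reference, a quick sketch of the conversion between GB-per-window and line speed used above:

```python
# Convert gigabytes downloaded in a time window to megabits per second.
def window_to_mbps(gigabytes: float, hours: float = 6.5) -> float:
    return gigabytes * 8e9 / (hours * 3600) / 1e6

print(window_to_mbps(45))   # ~15.4 Mbps
print(window_to_mbps(55))   # ~18.8 Mbps
```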
Here is what one VPN comparison site has to say about them:
Short version
First page of the long version
Here are interesting and relevant parts of their terms and conditions:
Each time a user connects to VyprVPN, we retain the following data for 30 days: the user's source IP address, the VyprVPN IP address used by the user, connection start and stop time and total number of bytes used.
Golden Frog takes copyright and other intellectual property rights very seriously. It is Golden Frog's policy to: Expeditiously block access to or remove content that it believes in good faith may contain material that infringes the copyrights of third parties and Remove and discontinue service to repeat offenders.
Because we do not log our users’ activities in order to protect and respect their privacy, we are unable to identify particular users that may be infringing the lawful copyrights of others. TorGuard does not store or log any traffic or usage from its Virtual Private Network (VPN) or Proxy.
NordVPN does not monitor, store or record logs for any VPN user. We do not store connection time stamps, used bandwidth, traffic logs, IP addresses.
One user on vpnreview said they were issued with DMCA notices when torrenting with VyprVPN. I find this believable.
Windows Desktop Apps.
They all have the usual "start with Windows" option, a choice of UDP or TCP packets, and DNS leak protection.
  • VyprVPN - has an "automatically connect on untrusted wifi" feature, and if you pay extra you can get a NAT firewall.
  • Nordvpn - lacks any extras of note.
  • PIA - has "PIA MACE", which blocks ads, trackers and malware (although all this is available with browser add-ons), plus IPv6 leak protection and port forwarding. One irritation about PIA's app was that it was NOT digitally signed. I can't understand why they would not digitally sign their desktop apps.
  • Torguard - offers IPv6 leak protection, "block outside DNS" and "prevent WebRTC leak" options. It offers a proxy option and the option to execute scripts "before connect", "after connect" and "after disconnect", and has a high-DPI scaling option.
The truth is a lot of these features can seem bewildering but won't really matter to a lot of people.
Kill switch - (A kill switch is the app's ability to stop either your entire internet connection or a selection of apps if the VPN disconnects. This prevents you accidentally torrenting/looking at naked people or whatever over a non-VPN-protected connection.)
  • Vyper - internet kill switch only
  • PIA - internet kill switch only
  • Torguard - both internet kill and application kill switches.
  • Nordvpn - application kill switches only. (imo inferior *2)
I found PIA and VyprVPN's internet kill switches easier to implement than Torguard's.
(*2 Application kill switches, which kill applications the same way you would in Task Manager, are not as good as internet kill switches, as one temporary disconnect can mean your torrent client/Steam/download manager is killed and not coming back until you restart it manually. So Nordvpn loses here. Nordvpn lost for this guy too.)
Special Types of connection.
  • VyprVPN offers its Chameleon connection, which it claims bypasses efforts to detect VPN usage. This could be useful when trying to watch TV streaming in foreign countries, and they claim it can help get you past censorship in places like China.
  • NordVPN offers DoubleVPN - your connection goes to one VPN server, THEN another VPN server, and then out to the internet. Also a fast streaming server for watching television, and Tor over VPN, which I think connects from the VPN to Tor and then out to the internet. They also offer dedicated IP addresses for a fee if you "contact support".
  • TorGuard offers stealth servers that they claim can get past "censorship, firewalls and Deep Packet Inspection". They also offer dedicated IP addresses for $8 a month, and a 10Gbit premium network for $20 a month, although you shouldn't need that for a home connection.
  • PIA doesn't offer any type of special connection.
It should be pointed out that more features does not mean better. For instance, I could stream 1080p video with all the VPNs, making special streaming servers redundant. DoubleVPN is of questionable added value compared to other features. Tor over VPN might mean NordVPN could possibly see what you are putting into Tor, whereas using Tor alone would prevent that ( or you could use Tor in a virtual machine after connecting to a VPN on your main host machine ).
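For the curious, the do-it-yourself "Tor over VPN" route is simple enough to sketch in Python: connect your machine to the VPN as normal, run a local Tor client, and point your application at Tor's SOCKS proxy. This assumes Tor is listening on its default 127.0.0.1:9050 and that the requests and PySocks packages are installed.

    # Route a request through the local Tor SOCKS proxy; with the VPN up
    # on the host, the path is you -> VPN -> Tor -> internet.
    import requests  # pip install requests[socks]

    proxies = {
        "http": "socks5h://127.0.0.1:9050",   # socks5h: DNS resolved via Tor too
        "https": "socks5h://127.0.0.1:9050",
    }

    # check.torproject.org reports whether the request really came via Tor
    resp = requests.get("https://check.torproject.org/api/ip",
                        proxies=proxies, timeout=30)
    print(resp.json())  # e.g. {"IsTor": true, "IP": "..."}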
DDoS protection. Probably only of interest to some online gamers / video streamers.
  • VyprVPN claims to offer DDoS protection, although it seems you have to use the Pro option and get the NAT firewall for it.
  • NordVPN offers some DDoS-protected servers as part of its regular subscription.
  • TorGuard offers DDoS protection, but it is an expensive add-on ( $12 a month at the time of writing ).
  • PIA does not offer any form of DDoS protection.
Number of simultaneous connections
  • NordVPN 6
  • TorGuard 5
  • PIA 5
  • VyprVPN Basic 2 - VyprVPN Pro 3 - VyprVPN Premier 5
( If you need a whole household connected, look into the router configuration options )
Linux apps
On Linux, TorGuard and PIA have dedicated apps as well as OpenVPN config files, while NordVPN and VyprVPN have only OpenVPN config files. The apps are easier and arguably have better features.
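Where there is no dedicated client, the config files go straight into the stock OpenVPN client. A minimal Python sketch; the file names are placeholders, while --config and --auth-user-pass are standard OpenVPN options.

    # Bring up a connection from a provider's .ovpn file on Linux.
    import subprocess

    subprocess.run(
        [
            "sudo", "openvpn",
            "--config", "provider-server.ovpn",     # hypothetical config from the VPN
            "--auth-user-pass", "credentials.txt",  # username on line 1, password on line 2
        ],
        check=True,
    )

In practice you would just run the equivalent openvpn command in a terminal; the point is that "no Linux client" only means a little more typing.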
Android apps
NordVPN's was poor, requiring regular uninstalls and reinstalls. NordVPN blames a fault in Android, but all the other apps "just worked" on the same phone. NordVPN and TorGuard don't have internet kill switches on their Android apps. VyprVPN has optional "trusted Wi-Fi" and optional malicious-site blocking, while PIA has optional ad, tracker and malware blocking. NordVPN offers its dedicated streaming servers in its Android client, which could be useful to tablet owners, although regular VPN servers were fast enough when I tried.
If you use tethering, here is a side note on tethering and VPNs: tethering while using the Android app was possible with all the VPNs except TorGuard, where tethered traffic did not get through while the Android app was connected. With the remaining VPNs, if you use the Android app and tether at the same time, your tethered traffic is NOT protected by the Android VPN ( i.e. your tethered traffic is treated as separate, non-VPN-protected traffic. Fortunately, all four VPNs offer protection for more than one device, so you can have both phone and tethered traffic protected ).
PIA and TorGuard have a good range of payment options. They all accept PayPal, and they all accept Bitcoin except for VyprVPN.
Side note: I did experience payment problems with NordVPN. My second payment didn't seem to register, and it took days of back and forth with support to get working. Cancelling payment early brought a premature end to the service, and I missed over a week of paid-for service in the second month. In fairness, though, it's doubtful these would be issues for a regular subscriber.
NOTE: deals/offers are often available, especially for long-term subscriptions. However, I would highly recommend taking out a one-month subscription first to "see how things go" before committing to any cheaper long-term deal.
  • PIA: 1 month - $7; 6 months - $36 ( $6 a month ); 1 year - $40 ( $3.33 a month )
  • TorGuard: 1 month - $10; 3 months - $20 ( $6.66 a month ); 6 months - $30 ( $5 a month ); 1 year - $60 ( $5 a month )
  • NordVPN: 1 month - $8; 6 months - $42 ( $7 a month ); 1 year - $69 ( $5.75 a month )
  • VyprVPN: 1 month Basic - $10, Pro - $15, Premier - $20; 1 year Basic - $80 ( $6.66 a month ), Pro - $100 ( $8.33 a month ), Premier - $120 ( $10 a month )
As you can see, PIA has the cheapest deal and VyprVPN is the most expensive.
( Prices accurate as of 1 August 2016; prices ending in .99 or .95 are rounded up to the nearest whole number for readability )
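The effective monthly prices above come straight from dividing each plan price by its length; a trivial Python sketch, using the prices as listed:

    # Effective monthly cost of each plan listed above.
    plans = {
        "PIA":           {1: 7,  6: 36, 12: 40},
        "TorGuard":      {1: 10, 3: 20, 6: 30, 12: 60},
        "NordVPN":       {1: 8,  6: 42, 12: 69},
        "VyprVPN Basic": {1: 10, 12: 80},
    }

    for provider, prices in plans.items():
        for months, total in sorted(prices.items()):
            print(f"{provider}: {months:>2} months = ${total} (${total / months:.2f}/month)")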
Unfortunately, all sorts of people abuse VPNs for a variety of purposes. This means you may have to fill out captchas to gain access to a website ( especially Cloudflare-protected websites ), and Google might warn you about "suspicious traffic" coming from your PC ( what they mean is your IP ). It's difficult to produce an accurate overall picture, so the following is a mostly subjective assessment. I would say VyprVPN, which has its own servers, was the least likely to be blocked; I could even post on 4chan with VyprVPN, while the others were all autoblocked. Second, based on limited experience, was probably PIA ( which also has its own servers ), although some people on the PIA forum might disagree. NordVPN and TorGuard, who share a lot of the same server hosting ( Redstation, iomart, dedicated server hosting ), were probably the worst. In fairness, I've gone months without any problems on TorGuard and then had to reconnect or fill out captchas four or five times in a couple of days. Your mileage will vary.
  • VyprVPN is not suitable if you torrent. Its apps are good, easy to use and have some nice features, but it's expensive, keeps logs and has no Linux client ( it does have OpenVPN config files ). VyprVPN was the least likely to be blocked by a website for "abuse".
  • NordVPN scores well, but a lot of it feels unfinished. The Android app can be a pain, the desktop app has only application kill switches, and there's no Linux client ( it does have OpenVPN config files ). It does offer novel special servers, but apart from anti-DDoS for gamers/streamers I would question their usefulness.
  • TorGuard may not have the slickest VPN software, but they do specifically cater for torrenting ( the "tor" in the name means torrent ). It was generally no fuss dealing with them or their app. My biggest gripe was that the internet kill switch was a bit difficult to set up.
  • PIA ticks all the boxes. It's suitable for torrenting and has good Android and desktop apps. It's the cheapest per month if you sign up for a year. I didn't like the lack of a digital signature on their app, and there are no frills ( DDoS protection, anti-VPN-detection ), but it is cheap and functional.
Overall, the most important thing I can emphasize is: try your VPN for a month or two before signing up for a discount deal. Which VPN suits you best comes down to which features matter to you. For most people speed, logging, price and perhaps an internet kill switch will be the biggest considerations. If you are torrenting, VyprVPN is a NO. I personally don't like the lack of an internet kill switch ( app-only kill switch ) in NordVPN, but the cheap DDoS protection might be good for gamers. For me it would be between TorGuard and PIA, and between those it would be as much personal taste as anything else.
Please note: this review was correct to the best of my ability as of early August 2016. VPN providers are always trying to add features and improve service, so it's worth checking things out for yourself. Thanks.
submitted by vpnhunter2014 to vpnreviews [link] [comments]

Related videos:
  • Bitcoin 80% Crash after the Halving!
  • 80 Trillion Dollar Bitcoin Exit Plan - YouTube
  • CRAZY BITCOIN CHART PREDICTS A 3 YEAR BULLRUN from NOW!!!
  • Get free Bitcoins (tutorial) +100% payout ...
  • SprintPay Review [NEW CPU Mining Coin!]

Another notable change in the Bitcoin Cash network is the increase in the size of the default blockchain data carrier from 80 bytes to 220 bytes. This allows a more robust OP_Return function, which is a relatively inexpensive way to embed data in the BCH chain. Essentially, OP_Return is a script opcode that marks an output as unspendable, but many cryptocurrency enthusiasts believe that ...
If you convert those hex bytes to Unicode, you get the string 3Nelson-Mandela.jpg?, representing the image filename. Similarly, the following addresses encode the data for the image. Thus, text, images, and other content can be stored in Bitcoin by using the right fake addresses.
Secret message in the first Bitcoin block: it is well known that the Genesis block, the very first block of data in ...
The blockchain keeps bloating, and Bitcoin will eventually collapse because of it. The blockchain is already 13 gigabytes in size and will keep growing. At the current rate of development, the blockchain will exceed 100 gigabytes within the next few years and will eventually be a terabyte in size. Then you need ...
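As a small illustration of the OP_Return mechanism described above, here is a Python sketch that builds an OP_RETURN output script around an arbitrary payload and decodes hex bytes back into text the way the hidden filename was recovered. The payload is made up; the 0x6a opcode and single-byte push encoding are standard Bitcoin script.

    # Build a raw OP_RETURN output script around a small payload.
    OP_RETURN = 0x6a

    def op_return_script(payload: bytes) -> bytes:
        if len(payload) > 75:  # single-byte pushes only, for simplicity
            raise ValueError("use OP_PUSHDATA1 for payloads over 75 bytes")
        return bytes([OP_RETURN, len(payload)]) + payload

    print(op_return_script(b"hello from the chain").hex())

    # Recovering text from hex bytes, as with the embedded filename:
    print(bytes.fromhex("334e656c736f6e2d4d616e64656c612e6a7067").decode("ascii"))
    # -> 3Nelson-Mandela.jpg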


Bitcoin 80% Crash after the Halving!

After the first Bitcoin halving in November 2012, the price of Bitcoin crashed more than 80% a couple of months later. How likely is such a Bitcoin dump after th...
#Bitcoin #BTC #Crypto The possibility of a Bitcoin dip of another 20% is very real. Do you think it will happen? Comment below and let me know what you think the BTC price will do next. Cycles of ...
GIGABYTE GA-Z270P-D3 LGA1151 ... 80+ PLATINUM 850W fully modular power supply; 6-GPU mining rig; aluminium stackable mining rig case; Gigabyte ...
Bitcoin - 80 Trillion Dollar Exit. I talk about how Bitcoin will eventually become an exit ramp from the crashing 80 trillion dollar financial system, the ec...
We also look at a Bitcoin analysis and discuss why market dominance is supposed to sit at 80%. Have fun! Always remember: you don't earn a ... anywhere overnight