Why I Removed Floating Points From the Proof-of-Work Algorithm and Why and How I Plan to Add Them Back

2024-09-06 · Ryan X. Charles

The parallel proof-of-work (PoW) algorithm that runs on the GPU was giving inconsistent results between browser and server about 5% of the time. Although I had gone out of my way to use floating-point operations that should theoretically be deterministic, that was not the case in practice, so I had to remove the floating-point operations to make the algorithm deterministic. This makes almost no difference in practice at this time, because the primary computation is still a large integer matrix multiplication; the floating-point operations added very little extra computation after that multiplication had been performed.
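To make the shape of the computation concrete, here is a minimal sketch of an integer matrix multiplication of the kind at the core of the parallel PoW. This is illustrative only: the matrix size, the element range, and the way the matrix is derived from the header are my own assumptions, not the actual EarthBucks parameters.

```typescript
// Sketch only: an exact integer matrix multiply as the core PoW step.
// deriveMatrix, the matrix size, and the element range are hypothetical
// placeholders, not the real EarthBucks specification.

function deriveMatrix(seed: Uint8Array, n: number): Int32Array {
  // Hypothetical: expand the header bytes into an n*n matrix of small integers.
  const m = new Int32Array(n * n);
  for (let i = 0; i < m.length; i++) {
    m[i] = seed[i % seed.length] % 16; // small values keep sums well within int32
  }
  return m;
}

function matMulInt(a: Int32Array, b: Int32Array, n: number): Int32Array {
  // Plain triple loop: every product and sum is exact integer arithmetic,
  // so any conforming CPU or GPU produces bit-identical results.
  const c = new Int32Array(n * n);
  for (let i = 0; i < n; i++) {
    for (let k = 0; k < n; k++) {
      const aik = a[i * n + k];
      for (let j = 0; j < n; j++) {
        c[i * n + j] += aik * b[k * n + j];
      }
    }
  }
  return c;
}
```

Because every intermediate value here is an exact integer, the grouping of the additions does not matter, which is why this part of the algorithm was never a source of mismatches.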

In theory, both integer and floating-point computations are standardized and deterministic across GPUs and CPUs. In practice, however, floating-point operations are much harder to keep deterministic, because floating-point arithmetic is not associative: changing the order of operations can change the result. Because the tool I am using to perform the computation, TensorFlow, does not guarantee the order of operations, the floating-point operations were in fact sometimes producing different results.
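As a concrete illustration (not code from the EarthBucks implementation), regrouping even three floating-point additions is enough to change the result, and a parallel reduction on a GPU is free to regroup in exactly this way:

```typescript
// Floating-point addition is not associative, so the grouping of a sum
// changes its result. A GPU library that reorders a reduction can therefore
// produce different bits on different runs or devices.

const leftToRight = (0.1 + 0.2) + 0.3; // 0.6000000000000001
const regrouped = 0.1 + (0.2 + 0.3);   // 0.6

console.log(leftToRight === regrouped); // false

// Integer addition has no such problem: (1 + 2) + 3 === 1 + (2 + 3) exactly,
// which is why the integer-only PoW is reproducible across browser and server.
```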

Although purely integer-based computations are adequate for now, there is a good reason to include some floating-point calculations in the PoW algorithm. The goal is to target mainstream consumer devices and saturate the computation on those devices, so that no one has an incentive to build an ASIC. Modern consumer devices have good GPUs, which perform numerical computation in parallel far faster than CPUs. Using those devices to their fullest means doing operations that are parallelizable and that use both floating-point and integer arithmetic, because GPUs are designed to do both.

I plan to add floating-point operations back later. The EarthBucks header includes two separate PoW algorithms, one for CPUs (serial) and one for GPUs (parallel), and they are designed to be changed over time as real-world conditions change. I also knew it was unlikely that I would design an optimal algorithm at launch, so building in the ability to change the algorithm(s) ensures future flexibility. The most likely way to add floating points back is to drop TensorFlow and hand-code the algorithms in WebGL, WebGPU, and CUDA. By hand-coding the algorithms instead of relying on a library (which is absolutely not intended for deterministic computation), I can ensure that the order of operations is fixed, which is what will allow floating-point operations to return to the PoW algorithm.
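As a sketch of what "ensuring the order of operations" could look like (an illustration under my own assumptions, not the actual EarthBucks kernel), a hand-written reduction can fix its combining tree explicitly, so that browser and server perform the same floating-point additions in the same order:

```typescript
// Sketch: a floating-point sum with a fixed, explicitly defined reduction
// tree. A hand-coded WebGL/WebGPU/CUDA kernel can mirror this exact schedule,
// making the result bit-identical on every device. Illustrative only.

function fixedOrderSum(values: Float32Array): number {
  const buf = Float32Array.from(values);
  let n = buf.length;
  while (n > 1) {
    const half = n >> 1;
    for (let i = 0; i < half; i++) {
      // Combine a fixed pair (2i, 2i+1) at each level, in ascending order,
      // rounding each partial sum to 32-bit precision as a GPU would.
      buf[i] = Math.fround(buf[2 * i] + buf[2 * i + 1]);
    }
    if (n % 2 === 1) {
      buf[half] = buf[n - 1]; // carry the unpaired element up unchanged
      n = half + 1;
    } else {
      n = half;
    }
  }
  return n === 1 ? buf[0] : 0;
}

// Example: the same input always reduces through the same tree.
const total = fixedOrderSum(Float32Array.from([0.1, 0.2, 0.3, 0.4, 0.5]));
console.log(total); // identical bits on every run and device
```

In a real kernel the same idea applies per workgroup: the schedule of partial sums becomes part of the specification rather than something left to the library.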

In summary, the PoW algorithm no longer uses floating-point operations, but it still performs a large integer matrix multiplication, which is by far the hardest part of the computation. The algorithm is designed to change over time, and there will be good reasons to add floating-point operations back later, which I plan to do by hand-coding the algorithms in WebGL, WebGPU, and CUDA.


