Since I launched the wallet on January 1st, so many things have happened that I have almost lost track. Here’s a summary:
The last point is what I want to describe today.
The current proof-of-work algorithm (PoW algo), called algo1627, is partially (and not entirely accurately) described here, and its source code can be found in the GitHub repo. It has a problem: not only must the proof be produced on a GPU, it must also be verified on a GPU. Running a GPU in production has proven to be very expensive and complex. Not only does it literally cost a lot of money to run one GPU (and, ideally, more than one), but the software to manage separate GPU instances takes a lot of time to build and maintain.
Is there a way to make the PoW algo run on a GPU, but verify on a CPU?
I believe the answer is yes. I have produced a proof-of-concept in Rust of such a PoW algo.
The big difference in the new PoW algo is that there is a second round of proof-of-work inside the main proof-of-work algorithm. What used to be a giant 1627x1627 matrix multiplication is now only a 128x128 matrix multiplication. Instead of one big matrix multiplication, each iteration is now many small matrix multiplications, each of which must lead to a value starting with 11 zero bits, so that the total amount of computation per iteration is approximately the same as before. Verifying it, however, requires only one small matrix multiplication.
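To make the idea concrete, here is a toy Rust sketch of the "mine with many small matmuls, verify with one" structure. Everything in it is a stand-in, not the real algorithm: the matrix size (8 instead of 128), the difficulty (8 bits instead of 11), the way matrices are derived from the header, and the FNV-style hash (the real design would use a cryptographic hash) are all chosen so the example runs quickly on a CPU.

```rust
// Toy sketch: the miner tries nonce after nonce, and each attempt costs one
// small matrix multiplication; a nonce wins when the hash of its matmul
// result starts with DIFFICULTY_BITS zero bits. The verifier redoes exactly
// ONE small matmul for the claimed nonce. All parameters are illustrative.

const N: usize = 8;             // stand-in for the real 128x128 size
const DIFFICULTY_BITS: u32 = 8; // stand-in for the real 11 bits

// FNV-1a, a toy stand-in for a real cryptographic hash.
fn toy_hash(data: &[u8]) -> u64 {
    let mut h: u64 = 0xcbf29ce484222325;
    for &b in data {
        h ^= b as u64;
        h = h.wrapping_mul(0x100000001b3);
    }
    h
}

// Derive a pseudo-random NxN matrix from the header bytes and a seed.
fn derive_matrix(header: &[u8], seed: u64) -> Vec<u64> {
    (0..N * N)
        .map(|i| {
            let mut bytes = header.to_vec();
            bytes.extend_from_slice(&seed.to_le_bytes());
            bytes.extend_from_slice(&(i as u64).to_le_bytes());
            toy_hash(&bytes) & 0xff
        })
        .collect()
}

// One small NxN matrix multiplication: the unit of work.
fn matmul(a: &[u64], b: &[u64]) -> Vec<u64> {
    let mut c = vec![0u64; N * N];
    for i in 0..N {
        for k in 0..N {
            let aik = a[i * N + k];
            for j in 0..N {
                c[i * N + j] = c[i * N + j].wrapping_add(aik.wrapping_mul(b[k * N + j]));
            }
        }
    }
    c
}

// Hash of one candidate: derive two matrices, multiply, hash the result.
fn candidate_hash(header: &[u8], nonce: u64) -> u64 {
    let a = derive_matrix(header, nonce);
    let b = derive_matrix(header, nonce ^ 0xdead_beef);
    let c = matmul(&a, &b);
    let bytes: Vec<u8> = c.iter().flat_map(|x| x.to_le_bytes()).collect();
    toy_hash(&bytes)
}

// Mining: about 2^DIFFICULTY_BITS small matmuls on average.
fn mine(header: &[u8]) -> u64 {
    (0u64..)
        .find(|&nonce| candidate_hash(header, nonce).leading_zeros() >= DIFFICULTY_BITS)
        .unwrap()
}

// Verification: exactly one small matmul for the claimed nonce.
fn verify(header: &[u8], nonce: u64) -> bool {
    candidate_hash(header, nonce).leading_zeros() >= DIFFICULTY_BITS
}

fn main() {
    let header = b"example block header";
    let nonce = mine(header);
    assert!(verify(header, nonce));
    println!("found nonce {nonce}");
}
```

With an 11-bit target, a miner needs about 2^11 = 2048 small matmuls per iteration on average, which is why the per-iteration work stays comparable to one big matmul while verification shrinks to a single small one.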
Currently, I am re-implementing the Rust algorithm in WebGPU. When the new algorithm launches, only WebGPU-enabled browsers will be able to mine. This is slightly more restrictive than requiring WebGL, but the current algorithm is so inefficient to verify that I had to start requiring verification, so the new algorithm will be a big improvement. Anyone can download a Chrome-based browser to use WebGPU (although it may not work on all devices).
This is currently my top priority and I hope to have a preview and launch the new algorithm in a matter of days.
The next big issue, besides running a GPU in production, is that the mining API is too inefficient and requires me to run a far beefier database instance than should theoretically be necessary. As I described in a recent blog post, I plan to change the mining API significantly.
Currently, the mining API requires a write to the database for every single swipe of the mining button. This is a big problem because there is an enormous number of swipes. Each “swipe” is extremely costly in terms of database usage, which has inflated the database cost by 10x, and possibly as much as 100x.
The solution is to frequently prepare one header containing a recent Merkle root, sign it, and send the signed header to the miner for every swipe. This requires zero database writes, and even zero reads, per swipe; only valid shares need to be written. This approach will dramatically decrease database usage.
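The shape of that stateless check can be sketched as follows. This is an illustration of the general technique, not the actual EarthBucks API: the `SignedHeader` struct, the function names, and especially the MAC (a toy hash of key-plus-message, standing in for a real signature or HMAC scheme) are all hypothetical.

```rust
// Toy sketch of a stateless mining API: the server periodically signs one
// header, miners echo the signed header back with each swipe, and the server
// validates each swipe with pure computation, touching the database only when
// a valid share arrives. The MAC here is a toy stand-in for a real scheme.

// FNV-1a, a toy stand-in for a real cryptographic hash.
fn toy_hash(data: &[u8]) -> u64 {
    let mut h: u64 = 0xcbf29ce484222325;
    for &b in data {
        h ^= b as u64;
        h = h.wrapping_mul(0x100000001b3);
    }
    h
}

struct SignedHeader {
    header: Vec<u8>, // would include a recent Merkle root and a timestamp
    tag: u64,        // server's MAC over the header bytes
}

// Done once per header refresh, NOT once per swipe.
fn sign_header(secret: &[u8], header: &[u8]) -> SignedHeader {
    let mut msg = secret.to_vec();
    msg.extend_from_slice(header);
    SignedHeader { header: header.to_vec(), tag: toy_hash(&msg) }
}

// Per-swipe check: recompute the MAC and compare. No database reads or
// writes; the server trusts its own signature instead of stored state.
fn check_swipe(secret: &[u8], sh: &SignedHeader) -> bool {
    let mut msg = secret.to_vec();
    msg.extend_from_slice(&sh.header);
    toy_hash(&msg) == sh.tag
}

fn main() {
    let secret = b"server-secret";
    let signed = sign_header(secret, b"header: merkle-root + timestamp");
    assert!(check_swipe(secret, &signed)); // valid swipe, zero DB access

    // A header the server never signed fails the check and is dropped.
    let forged = SignedHeader { header: b"forged".to_vec(), tag: signed.tag };
    assert!(!check_swipe(secret, &forged));
    println!("ok");
}
```

The design point is that validity is carried by the signature itself, so per-swipe work is a single MAC recomputation; the database is only involved for the rare swipe that is also a valid share.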
The new mining API is a work-in-progress and I plan to finish it and launch it after the new PoW algo is in production.
I recently made a video on what it will take to get to EarthBucks 1.0. Here is a summary of the changes I plan to make:
I am working hard to make EarthBucks 1.0 a reality. I hope to have the new PoW algo and mining API in production in a matter of days. I will keep you updated on my progress. Thank you for your support!