Why Ethereum’s next step is zero-knowledge
The following is a guest post by Rob Viglione, CEO of Horizen Labs.
In the past year, there have been some major milestones along the Ethereum roadmap that have leveled up the network. EIP-4844 (proto-danksharding, shipped in the Dencun upgrade) introduced blobs, making data posting an order of magnitude cheaper for Layer 2s and resulting in far lower transaction fees.
Meanwhile, Layer 2s (mostly of the optimistic variety) have become more deeply integrated and widely used in applications, making it possible to transact for less than a penny while building on Ethereum's base infrastructure.
However, as anyone who has paid attention to gas fees will know, there is still too much congestion on Ethereum, and as the real-world usage of blockchains grows, more and more dApps will be competing for block space and computation.
It doesn’t take an engineer or cryptographer to know that this is unsustainable. We’ve seen what happens when Ethereum gets too congested. In some particularly high-surge moments, users have paid more than 2 ETH just to complete a transaction, and some of those transactions still failed as users scrambled to have them prioritized.
In a perfect world, we would move as much of that computation offchain as possible, while still posting a succinct, verifiable proof onchain that the data is correct and in the right place.
Zero-knowledge proofs make this possible, but it is still challenging for a blockchain to verify proofs over the EVM's vast space of possible computations, and going this route can quickly become expensive. Zk-rollups have to pay for specialized prover hardware to generate ZK proofs, and those proofs then typically need to be converted into a proof type that Ethereum can verify.
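To make the "prove offchain, verify succinctly" pattern concrete, here is a minimal sketch using the open-source snarkjs library with a Groth16 proof. The circuit artifacts and inputs (circuit.wasm, circuit_final.zkey, verification_key.json, and the {a, b} values) are hypothetical placeholders for whatever circuit an application actually compiles; the point is that proving is the heavy offchain step, while verification is cheap and constant-size.

```typescript
// Sketch only: assumes a circom circuit has already been compiled and a
// Groth16 trusted setup has produced the .wasm / .zkey / verification key files.
import * as snarkjs from "snarkjs";
import { readFileSync } from "fs";

async function main() {
  // Heavy work happens here, offchain: the prover evaluates the whole circuit
  // and produces a small, fixed-size proof plus its public outputs.
  const { proof, publicSignals } = await snarkjs.groth16.fullProve(
    { a: 3, b: 11 },        // hypothetical circuit inputs
    "circuit.wasm",         // compiled circuit (placeholder path)
    "circuit_final.zkey"    // proving key from setup (placeholder path)
  );

  // Verification is the cheap part — this is the step a chain (or a dedicated
  // verification layer) performs, regardless of how big the computation was.
  const vKey = JSON.parse(readFileSync("verification_key.json", "utf8"));
  const ok = await snarkjs.groth16.verify(vKey, publicSignals, proof);
  console.log("proof valid:", ok);

  // snarkjs keeps curve worker threads alive, so exit explicitly in a script.
  process.exit(ok ? 0 : 1);
}

main();
```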
In short, optimistic rollups are relatively easy and affordable to verify, while zk-rollups are challenging and expensive. For small and even medium-sized businesses that want to move some of their operations onchain and keep them confidential, zk-rollups are the way to go, but proof verification can be a prohibitive expense.
Rollup ecosystems have their own interests
Up to this point, the branded L2s haven’t been interested in a modular proof verification solution like zkVerify — which can reduce verification costs by 90% or more. They could adopt it down the road, but it isn’t their focus at the moment. Generally, the big L2 ecosystems believe in verifying all of these ZK proofs on the same chain and amortizing these costs across users.
However, we did find an opportunity with rollup-as-a-service (RaaS) providers, because they believe in a modular approach to blockchains and tend to serve small and medium-sized projects that can't afford to pay those verification costs. For them, the idea of sending proofs to a standalone chain and then posting the proof verification back to Ethereum makes a lot of sense. Just as with modular data availability, we are now seeing RaaS providers embrace modular proof verification with open arms.
The big L2s have two main arguments against this approach: first, they believe it lessens the L2’s security to move proof verification to a different layer. In actuality, some of these L2s already verify their proofs offchain. They just don’t publicize that.
Their other argument is that they would prefer to aggregate proofs, grouping a large batch of proofs together and essentially creating a "proof of proofs." By doing that, the big L2s are able to spread the cost over a much larger number of transactions. However, they seem less concerned that, with this approach, it can take hours to collect and aggregate hundreds of proofs, and that the end cost can actually be higher.
Aggregation makes sense for a lot of use cases, but not necessarily for an application where you want to do something quickly and have it verified in the same amount of time.
At the end of the day, you still have to trust the L2 that you’re on.
In some ways, the EVM is stuck in 2017
As our team kept digging into the ZK space and Ethereum's relationship with it, we discovered that Ethereum actually does have some native support for the elliptic-curve math behind zero-knowledge proofs through precompiles, built-in contracts that make the computation involved in verifying a proof far more efficient. But the network currently supports only three operations, point addition, scalar multiplication, and a pairing check, on a single curve, alt_bn128 (often called bn128 or bn254).
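As a concrete illustration, here is a minimal sketch that calls two of those precompiles, point addition at address 0x06 and scalar multiplication at 0x07, directly via ethers.js v6. The RPC URL is a placeholder for any mainnet endpoint; the check simply confirms that doubling the bn128 generator point gives the same result both ways.

```typescript
// Sketch: exercising the bn128 (alt_bn128) precompiles with read-only eth_call.
import { JsonRpcProvider, concat, toBeHex, zeroPadValue } from "ethers";

const provider = new JsonRpcProvider("https://ethereum-rpc.example"); // placeholder RPC URL

// Encode a field element as a 32-byte big-endian word.
const word = (n: bigint) => zeroPadValue(toBeHex(n), 32);

// The alt_bn128 generator point G1 = (1, 2), encoded as two 32-byte words.
const G1 = concat([word(1n), word(2n)]);

async function main() {
  // ECADD at 0x06: input is two points (128 bytes), output is their sum (64 bytes).
  const doubled = await provider.call({
    to: "0x0000000000000000000000000000000000000006",
    data: concat([G1, G1]), // G + G
  });

  // ECMUL at 0x07: input is a point plus a 32-byte scalar (96 bytes).
  const scaled = await provider.call({
    to: "0x0000000000000000000000000000000000000007",
    data: concat([G1, word(2n)]), // 2 * G
  });

  console.log("precompiles agree:", doubled === scaled); // expect true
}

main().catch(console.error);
```

The third precompile, the pairing check at 0x08, is the one that makes onchain SNARK verification possible at all; any proof system that doesn't reduce to these three operations on this one curve has to be wrapped first.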
What does this mean for users? Zk-SNARKs built on other curves or proof systems cannot be verified directly, so their proofs have to be wrapped into a form the EVM understands (typically a proof over the bn128 curve), which means less efficiency, more room for error, and potentially higher costs. Ideally, developers should be able to choose the zk-SNARK that best fits their application, and not being able to do so means they have to compromise on quality.
Technically, it’s possible for Ethereum to adopt more advanced precompiles over time, but it can take years for them to be implemented. The elliptic-curve precompiles we have today were added in the Byzantium upgrade back in 2017, and no new curves have been supported since.
Why is that? A lack of demand? Is it actually not feasible to implement these on Ethereum? And even if the community is able to do so, would it still be inefficient to compute with these new precompiles on the EVM?
It’s not clear. But what is clear is that the EVM needs to be overhauled, and having ZK proofs verified onchain is still too costly for the average use case. After hardware, it’s the biggest expense when using a zk-rollup.
At Horizen Labs, we’re tackling this in two ways: by offering modular proof verification in the form of zkVerify, and building a fully EVM-compatible chain with support for the latest zero-knowledge precompiles.
On the second front, Horizen 2.0 is built on Substrate, which allows for forkless upgrades that are applied automatically right after a community vote. No work has to be done on the node side, and no hard fork is required.
Some teams will prefer staying within a dedicated ecosystem like Horizen 2.0, with its own tight-knit community and network effects. Others will choose to go the RaaS route to build their own custom rollup, and they’ll be able to enjoy the cost savings of offchain proof verification there as well.
There are multiple ways to evolve the EVM with ZK, but we believe it needs to happen before the next wave of adoption.