Author: EQ Labs Source: equilibrium Translation: Shanolba, Golden Finance
Our Privacy Series Part 1 introduced the meaning of “privacy”, how privacy in the blockchain network differs from web2 privacy, and why privacy is difficult to achieve in the blockchain.
The main point of this article is that if the ideal end state is programmable privacy infrastructure that can handle shared private state without any single point of failure, then all roads lead to MPC. We will also discuss the maturity of MPC and its trust assumptions, highlight alternative approaches, compare trade-offs, and give an industry overview.
The existing privacy infrastructure in blockchains is designed to handle very specific use cases, such as private payments or voting. This is a rather narrow view, one that mainly reflects what blockchains are used for today (trading, transfers, and speculation). As Tom Walpo put it - cryptocurrency suffers from the Fermi Paradox:
In addition to increasing individual freedom, we believe privacy is a prerequisite for expanding the design space of blockchains beyond the current speculative meta. Many applications require some private state and/or hidden logic to function properly:
Empirical analysis (from both web2 and web3) shows that most users are unwilling to pay extra for better privacy or to jump through extra hoops for it, and we agree that privacy by itself is not a selling point. However, it enables new and (hopefully) more meaningful use cases to exist on the blockchain - let's get rid of the Fermi paradox.
Privacy-enhancing technologies (PETs) and modern cryptographic solutions ("programmable cryptography") are the fundamental building blocks for achieving this vision (for an overview of available solutions and their trade-offs, see the appendix).
Our view on the development of blockchain privacy infrastructure is determined by three key assumptions:
The Endgame of Privacy Infrastructure
Given the above assumptions, what is the endgame for blockchain privacy infrastructure? Is there one approach suited to all applications? One privacy-enhancing technology to rule them all?
Not entirely. All of these involve different trade-offs, and we’ve seen them combined in various ways. Overall, we’ve identified 11 different approaches.
Currently, the two most popular methods for building privacy infrastructure in blockchain are using ZKP or FHE. However, both have fundamental flaws:
If the ideal end state is programmable privacy infrastructure that can handle shared private state without any single point of failure, then both roads lead to MPC:
Please note that although these two methods will eventually converge, the uses of MPC are different:
Although the discussion has begun to shift towards more detailed perspectives, the guarantees behind these different approaches have not yet been fully explored. Given that our trust assumptions boil down to the assumption of MPC, the three key questions that need to be raised are:
1. How strong is the privacy protection that MPC protocols in blockchains can provide?
2. Is the technology mature enough? If not, what are the bottlenecks?
3. Given the strength of the guarantees and the overhead it introduces, does it make sense compared to other approaches?
Let’s answer these questions in more detail.
Whenever FHE is used in a solution, one should always ask: "Who holds the decryption key?" If the answer is "the network", the follow-up question is: "Which real entities make up this network?" The latter question is relevant to every use case that relies on MPC in some form.
Source: Zama's talk at EthCC
The main risk of MPC is collusion, where enough participants act maliciously together to decrypt data or hijack the computation. Collusion can be arranged off-chain and is only exposed when the malicious parties take some obvious action (such as blackmail or minting tokens out of thin air). Needless to say, this has a significant impact on the privacy guarantees the system can provide. The risk of collusion depends on:
TLDR: Not as powerful as we hoped, but stronger than relying on a centralized third party.
The threshold required for decryption depends largely on the chosen MPC scheme - a trade-off between liveness ("guaranteed output delivery") and security. You can use a very secure N/N scheme, but it breaks down as soon as a single node goes offline. N/2 or N/3 schemes, on the other hand, are more robust but carry a higher risk of collusion.
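This trade-off can be illustrated with a toy Shamir secret-sharing sketch in Python (purely illustrative; the field, parameters, and function names below are our own and do not correspond to any particular team's scheme). The key point: the same threshold that lets the network keep working when nodes go offline is exactly the number of colluding parties needed to break privacy.

```python
# Toy Shamir secret sharing over a prime field (illustration only,
# NOT a real threshold-decryption implementation).
import random

P = 2**127 - 1  # prime modulus for the field

def share(secret: int, threshold: int, n_parties: int):
    """Split `secret` into n shares; any `threshold` of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n_parties + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

# A 3-of-5 scheme: tolerates 2 offline nodes, but 3 colluders can decrypt.
shares = share(secret=42, threshold=3, n_parties=5)
assert reconstruct(shares[:3]) == 42  # any 3 parties suffice
# 2 parties learn nothing useful (reconstruction fails w.h.p.):
assert reconstruct(shares[:2]) != 42
```

Raising the threshold toward N/N shifts the dial from liveness toward security: collusion needs every party, but a single offline node halts the protocol.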
The two conditions that need to be balanced are:
The chosen schemes vary across implementations. For example, Zama is targeting N/3, while Arcium is currently implementing an N/N scheme but will later also support schemes with stronger liveness guarantees (and larger trust assumptions).
At this juncture, a compromise solution is to adopt a hybrid approach:
While this is theoretically attractive, it also introduces additional complexity, such as how the compute committee interacts with the high-trust committee.
Another way to strengthen security is to run the MPC inside trusted hardware, so that the key shares are held in a secure enclave. This makes it harder to extract the key shares or to use them for anything beyond what the protocol defines. At least Zama and Arcium are exploring the TEE angle.
More subtle risks include edge cases around social engineering, for example a senior engineer being employed at every company in the MPC cluster over a period of 10-15 years.
From a performance perspective, the key challenge for MPC is communication overhead. It grows with the complexity of the computation and the number of nodes in the network (more back-and-forth communication is required). For blockchain use cases, this has two practical implications:
Source: Zama's talk at EthCC
3. Permissioned operator set: In most cases, the operator set is permissioned. This means we rely on reputation and legal contracts rather than economic or cryptographic security. The main challenge with a permissionless operator set is that there is no way to know whether parties are colluding off-chain. In addition, it would require regular bootstrapping or re-dealing of key shares so that nodes can dynamically join/leave the network. Although a permissionless operator set is the end goal and there is ongoing research into extending the PoS mechanism to threshold MPC (e.g. Zama), the permissioned route seems the best way forward for now.
A comprehensive privacy package includes:
This is very complicated: it introduces many unexplored edge cases, carries high overhead, and may not be practically achievable for many years to come. Another risk is that people develop a false sense of security by stacking multiple complex concepts. The more complexity and trust assumptions we add, the harder it is to reason about the overall security of the solution.
Is it worth it? Maybe, but it is also worth exploring approaches that may offer significantly better computational efficiency at the cost of slightly weaker privacy guarantees. As Lyron from Seismic pointed out - we should focus on the simplest solution that meets our bar for the required level of privacy and acceptable trade-offs, rather than over-engineering just for the sake of it.
If both ZK and FHE ultimately fall back on the trust assumptions of MPC, why not use MPC for the computation directly? This is a valid question, and it is what teams such as Arcium, SodaLabs (which powers Coti v2), Taceo, and Nillion are attempting. Note that MPC comes in many forms, but of the three main approaches, we refer here to protocols based on secret sharing and garbled circuits (GC), as opposed to FHE-based protocols that use MPC only for decryption.
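To make the secret-sharing flavor of MPC concrete, here is a minimal Python sketch (our own toy example, semi-honest model, not any team's protocol): three parties jointly compute a sum without any single party ever seeing the plaintext inputs.

```python
# Minimal sketch of secret-sharing-based MPC: three parties compute a sum
# over their private inputs (toy example, semi-honest model).
import random

P = 2**61 - 1  # prime modulus for the arithmetic

def additive_shares(x: int, n: int):
    """Split x into n random shares that sum to x mod P."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

# Each party secret-shares its private input among all three parties.
inputs = [10, 20, 12]
shares_by_party = [[0, 0, 0] for _ in range(3)]
for owner, x in enumerate(inputs):
    for party, s in enumerate(additive_shares(x, 3)):
        shares_by_party[party][owner] = s

# Addition is purely local: each party sums the shares it holds...
local_sums = [sum(row) % P for row in shares_by_party]

# ...and only the final reconstruction reveals the result.
assert sum(local_sums) % P == sum(inputs)  # 42
```

Addition requires no communication at all; it is multiplication (and hence general computation) that forces the back-and-forth rounds responsible for MPC's communication overhead.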
Although MPC is already used for simple computations such as distributed signing and more secure wallets, the main challenge in using MPC for more general computation is communication overhead (which grows with the complexity of the computation and the number of nodes involved).
There are ways to reduce the cost, for example by doing the most expensive part of the protocol as offline pre-processing in advance - something Arcium and SodaLabs are exploring. The computation is then executed in an online phase that consumes some of the data produced in the offline phase. This greatly reduces the overall communication overhead.
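A common instance of this offline/online split is Beaver multiplication triples. The sketch below (our own toy, two parties, semi-honest, with a trusted dealer standing in for the expensive offline protocol; not how Arcium or SodaLabs actually implement it) shows how a precomputed triple makes online multiplication cheap:

```python
# Sketch of the offline/online split via Beaver triples (2 parties,
# semi-honest; a trusted dealer stands in for the offline protocol).
import random

P = 2**61 - 1

def share2(x):
    """Split x into two additive shares mod P."""
    r = random.randrange(P)
    return r, (x - r) % P

# --- Offline phase (expensive, input-independent): triple c = a*b ---
a, b = random.randrange(P), random.randrange(P)
a0, a1 = share2(a); b0, b1 = share2(b); c0, c1 = share2(a * b % P)

# --- Online phase (cheap): multiply secret-shared x and y ---
x, y = 6, 7
x0, x1 = share2(x); y0, y1 = share2(y)

# Parties open the masked values d = x - a and e = y - b. This reveals
# nothing about x, y because a, b are uniformly random one-time masks.
d = (x0 - a0 + x1 - a1) % P
e = (y0 - b0 + y1 - b1) % P

# Each party derives its share of x*y locally from the triple, using
# x*y = c + d*b + e*a + d*e.
z0 = (c0 + d * b0 + e * a0 + d * e) % P  # one party adds the public d*e
z1 = (c1 + d * b1 + e * a1) % P
assert (z0 + z1) % P == x * y  # 42
```

The online phase costs only one opening of two masked values per multiplication, which is why pushing triple generation into an offline phase cuts the overall communication so dramatically.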
The table below from SodaLabs shows initial benchmarks, in microseconds, for executing different opcodes 1,000 times in its gcEVM. While this is a step in the right direction, there is still a lot of work to do to improve efficiency and expand the operator set beyond a few nodes.
Source: SodaLabs
The advantage of the ZK-based approach is that MPC is used only for the use cases that require computation over shared private state. FHE competes with MPC more directly and leans heavily on hardware acceleration.
Interest in TEEs has recently been rekindled. They can be used standalone (TEE-based private blockchains or coprocessors) or combined with other PETs (such as ZK-based solutions), using the TEE only for computation over shared private state.
Although TEEs are more mature in some respects and introduce less performance overhead, they are not without drawbacks. First, TEEs carry a different trust assumption (1/N) and offer a hardware-based rather than software-based solution. A common criticism concerns past vulnerabilities in SGX, but it is worth noting that TEE ≠ Intel SGX. TEEs also require trust in the hardware vendor, and the hardware is expensive (out of reach for most). One way to mitigate the risk of physical attacks might be to run TEEs in space for critical tasks.
Overall, TEEs seem better suited for attestations or use cases that only need short-term privacy (threshold decryption, dark order books, etc.). For permanent or long-term privacy, the security guarantees look less attractive.
Trusted intermediaries can provide privacy from other users, but the privacy guarantee then rests entirely on trust in a third party (a single point of failure). While this resembles "web2 privacy" (privacy from other users, not from the operator), it can be strengthened with additional guarantees (cryptographic or economic) and allows verification of correct execution.
A private Data Availability Committee (DAC) is one example: DAC members store data off-chain, and users trust them to store it correctly and to enforce state-transition updates. Another flavor is the permissioned sequencer proposed by Tom Walpo.
While this approach makes significant trade-offs in privacy, in terms of cost and performance, it may be the only viable alternative for low-value, high-performance applications (at least for now). For example, Lens Protocol plans to use a private DAC to achieve private information flow. For on-chain social use cases, the trade-off between privacy and cost/performance may currently be reasonable (considering the cost and overhead of alternatives).
Stealth addresses can provide privacy guarantees similar to creating a new address for each transaction, but the process happens automatically on the backend and is invisible to the user. For more information, see Vitalik's overview or this article on different approaches. The main players in this field include Umbra and Fluidkey.
Although stealth addresses offer a relatively simple solution, their main drawback is that they only add privacy guarantees to transactions (payments and transfers), not to general computation. This sets them apart from the other three solutions mentioned above.
In addition, the privacy stealth addresses provide is weaker than that of the alternatives. Anonymity can be broken with simple clustering analysis, especially when incoming and outgoing transfers are not in a similar range (e.g. receiving $10,000 but spending an average of $10-100 per day). Another challenge with stealth addresses is key rotation, which today must be done individually for each wallet (aggregated keystores could help solve this). On the UX side, stealth-address protocols also require account abstraction or a paymaster to cover fees if the account holds no fee token (such as ETH).
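The "automatic new address per payment" mechanic above can be sketched as follows (a toy Python model using classic Diffie-Hellman in place of the elliptic curves that real schemes like Umbra use; group parameters and function names are our own and not cryptographically sound):

```python
# Toy sketch of stealth-address derivation (DH in a multiplicative group
# stands in for ECDH; illustration only, NOT cryptographically sound).
import hashlib
import random

P = 2**127 - 1  # toy group modulus
G = 5           # generator

def keypair():
    sk = random.randrange(2, P - 1)
    return sk, pow(G, sk, P)

# The recipient publishes a single meta public key once.
recipient_sk, recipient_pk = keypair()

def derive_stealth(recipient_pk):
    """Sender derives a fresh one-time address; eph_pk goes on-chain."""
    eph_sk, eph_pk = keypair()                 # sender's ephemeral key
    shared = pow(recipient_pk, eph_sk, P)      # DH shared secret
    tweak = int.from_bytes(
        hashlib.sha256(str(shared).encode()).digest(), "big")
    stealth_pk = recipient_pk * pow(G, tweak, P) % P
    return eph_pk, stealth_pk

def recover_stealth_sk(recipient_sk, eph_pk):
    """Only the recipient can derive the matching spend key."""
    shared = pow(eph_pk, recipient_sk, P)      # same DH secret
    tweak = int.from_bytes(
        hashlib.sha256(str(shared).encode()).digest(), "big")
    return recipient_sk + tweak

eph_pk, stealth_pk = derive_stealth(recipient_pk)
sk = recover_stealth_sk(recipient_sk, eph_pk)
assert pow(G, sk, P) == stealth_pk  # recipient controls the new address
```

Each payment lands at a fresh, unlinkable-looking address, yet the recipient scans on-chain ephemeral keys to find and spend funds with one meta key, which is also why the key-rotation and fee-token issues above arise: every derived address is its own wallet.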
Given the rapid pace of development and the widespread uncertainty of various technical solutions, we believe that there are risks to the argument that MPC will be the ultimate solution. We may not ultimately need some form of MPC, primarily for the following reasons:
Ultimately, a chain is only as strong as its weakest link. In the case of programmable privacy infrastructure, if we want it to handle shared private state without a single point of failure, the trust guarantees come down to those of MPC.
Although this article may sound critical of MPC, that is not the intent. MPC is a big improvement over today's reliance on centralized third parties. The main problem, in our view, is the false sense of confidence across the industry, where issues are swept under the rug. Instead, we should confront the problems head-on and focus on assessing the potential risks.
However, not all problems require the use of the same tool to solve them. Although we consider MPC to be the ultimate goal, other methods are also viable choices as long as the cost of MPC-driven solutions remains high. We should always consider which approach is most suitable for the specific requirements/characteristics of the problem we are trying to solve, as well as the trade-offs we are willing to make.
Just because you have the best hammer in the world doesn’t mean everything is a nail.