Nakamoto Challenge // crypto and beyond problems

sbagency
Sep 29, 2023


https://a16zcrypto.com/posts/announcement/introducing-the-nakamoto-challenge-addressing-the-toughest-problems-in-crypto/

The challenge focuses on seven problems. Most of them could be solved by researching and developing new protocol(s) (DLT / HPC VM) that eliminate these problems by design. Let's dive in.

The Limits of Atomic Composability and Shared Sequencing

Problem statement: Perhaps the most-talked-about path to scaling blockchains today is deploying many rollups on Ethereum. While this bootstraps security and decentralization, deploying many disconnected rollups on Ethereum will fracture composability. We believe atomic composability — the ability to send a transaction A that finalizes if, and only if, transaction B is finalized — is crucial.

Please describe the limits of composability between rollups on Ethereum. Ideally a solution would propose a formal model of rollups and an impossibility result.

A rollup is a batch transaction: many transactions are packaged into one and posted to L1 together. So if transaction A depends on transaction B, and B lives in another rollup, A effectively depends on the finality of B's entire rollup batch.
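As a toy illustration of the "A finalizes if and only if B finalizes" property (a sketch only: the Rollup class and atomic_include function are hypothetical, and real atomicity also depends on both rollups settling on L1, which a shared sequencer alone does not guarantee):

```python
from dataclasses import dataclass, field

@dataclass
class Rollup:
    name: str
    finalized: set = field(default_factory=set)

def atomic_include(tx_a: str, rollup_a: Rollup, tx_b: str, rollup_b: Rollup,
                   both_executable: bool) -> None:
    """All-or-nothing inclusion: tx_a finalizes iff tx_b finalizes.
    A shared sequencer can enforce joint *inclusion*, but joint *finality*
    still depends on both rollups' batches settling on L1."""
    if both_executable:
        rollup_a.finalized.add(tx_a)
        rollup_b.finalized.add(tx_b)
    # otherwise neither transaction is finalized

a, b = Rollup("A"), Rollup("B")
atomic_include("txA", a, "txB", b, both_executable=True)
assert ("txA" in a.finalized) == ("txB" in b.finalized)  # the iff property
```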

A simple solution: don't use rollups on Ethereum at all. The underlying problem is scalability, and it can be solved by designing new scalable DLT protocol(s) with strong security.

DePIN Verification

Problem Statement: Decentralized Physical Infrastructure Networks (DePIN) represent a class of blockchain applications dealing with physical infrastructure. Whereas smart contract platforms and payments can use classical consensus or validity proofs for trustless computation, DePIN projects often can’t due to scalability constraints and the oracle problem of verifying physical sensor data.

Current hardware-based approaches to verification include embedding a public/private key pair at the time of manufacture, or building custom hardware with a secure element like a trusted execution environment. Unfortunately, embedding a key pair means that only devices manufactured by certain parties can join the network, adding a level of permissioning, and trusted execution environments both require application-specific hardware and are often vulnerable to hacks.

As existing software approaches like consensus and validity proofs aren’t feasible, and existing hardware approaches have significant downsides, we’re excited about new potential software-based approaches to verification. Some projects have explored the idea of random sampling as a measurement method to ensure that rational participants in a DePIN network are behaving in accordance with the protocol.

The early outline of a random sampling approach to verification usually involves the network generating measurement requests to each provider/validator on the network. If the measurement request is correctly served, the provider receives a larger reward, akin to a block reward. As long as the provider can’t distinguish between a measurement request and a normal request, they are incentivized to correctly respond to each request.

Without verification, many DePIN networks fall victim to three common incentive challenges:

Self dealing: providers in a DePIN network request services from themselves and receive a block reward or service payment from the network. If providers receive a larger payment than users make (often the case because of early subsidies or block rewards), then it's profitable to buy service from yourself as a service provider.

Lazy providers: providers commit to serving client requests but simply don't respond, or respond with lower quality of service than they committed to.

Malicious providers: a provider is willing to lose money to convince a client of a malicious response. Random sampling, as currently outlined, does the worst job addressing malicious providers, and a much better job at ameliorating self dealing or laziness.
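A rough sketch of the random-sampling incentive outlined above (all parameters here are hypothetical; the point is only that when audits are indistinguishable from normal requests, the provider must pick one response rate for all traffic, so honest service maximizes expected payoff):

```python
import random

def avg_payoff(respond_rate: float, audit_frac: float = 0.1,
               reward: float = 15.0, cost: float = 1.0,
               trials: int = 100_000, seed: int = 1) -> float:
    """Average per-request payoff for a provider that serves each
    incoming request with probability `respond_rate`. Correctly served
    audits pay a reward; every served request costs `cost` to fulfil."""
    random.seed(seed)
    total = 0.0
    for _ in range(trials):
        is_audit = random.random() < audit_frac   # provider can't tell
        if random.random() < respond_rate:        # provider serves it
            total -= cost
            if is_audit:
                total += reward
    return total / trials

for rate in (1.0, 0.5, 0.0):
    print(f"respond rate {rate:.0%}: avg payoff {avg_payoff(rate):+.3f}")
# honest (100%) beats lazy (0%) whenever audit_frac * reward > cost
```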

A great challenge, and of course not one that can be solved easily. But what if you used another network (or several) to verify the first one? There are parallels with governance problems (DAOs, for instance): there is no ultimate way to verify a network from the inside if it is corrupted, because the verification itself can be faked. Cross-verification between independent networks offers stronger fraud prevention and greater resistance to attacks.

More details will be discussed later (a cross-network transaction protocol).

JOLT + Lasso Problem

SNARK virtual machines (VMs) enable highly-scalable decentralized computation, such as blockchains. Jolt is a new model for building SNARK VMs on top of Lasso, a fast lookup argument. We believe Jolt will be the most efficient way to build custom SNARK VMs in the near future. We released a sample implementation for Lasso earlier this year and are targeting a full release of Jolt later this year.

The efficiency of modern non-interactive proof systems is dependent on the efficiency of their polynomial commitment schemes. Lasso builds on a different lineage of SNARKs than the majority of those in production today. These sumcheck-based SNARKs depend on multilinear polynomial commitment schemes (PCS) rather than univariate ones. As a result, less analysis has been put into the efficiency properties (for both the prover and verifier) of multilinear polynomial commitment schemes. Section 2.2 of the Lasso paper briefly describes these different PCS.

Please expand on this section to provide a comprehensive analysis (theoretical and/or empirical) of 3–5 different polynomial commitment schemes and their cost in the context of verification for decentralized systems. We’re interested in both the cost profiles of the prover and verifier directly, as well as the cost of recursively verifying Jolt proofs within existing SNARK schemes, especially those with EVM-compatible verifiers.

Specifically, please detail:

Prover compute cost

Verifier compute cost

Proof size

Recursive verifier compute cost

Recursive verifier proof size
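For intuition about the multilinear polynomials these commitment schemes handle, here is a toy evaluation of the multilinear extension of a lookup table over a prime field (purely illustrative; the modulus and folding routine are my own choices and unrelated to Lasso's actual implementation):

```python
P = 2**61 - 1  # toy prime modulus, chosen only for illustration

def mle_eval(table: list[int], point: list[int]) -> int:
    """Evaluate the multilinear extension of `table` (length 2^n) at
    `point` in F_P^n by folding out one variable per round, the same
    streaming access pattern sumcheck-based provers exploit."""
    vals = [v % P for v in table]
    for x in point:                      # most-significant variable first
        half = len(vals) // 2
        # f(x, rest) = (1 - x) * f(0, rest) + x * f(1, rest)
        vals = [((1 - x) * vals[i] + x * vals[i + half]) % P
                for i in range(half)]
    return vals[0]

table = [3, 1, 4, 1]                     # a 2-variable lookup table
assert mle_eval(table, [0, 1]) == 1      # agrees with the table on {0,1}^2
print(mle_eval(table, [5, 7]))           # a point off the hypercube
```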

The problem with the EVM is that it can't run computationally heavy algorithms. The solution is to develop an HPC virtual machine (ZKP-friendly) rather than to shrink the cost and size of proofs; that kind of reduction may compromise security.

The HPC-VM details will be discussed later (GPGPU).

Compliant Programmable Privacy

While most smart contracts and blockchains today are fully transparent, we deeply believe privacy is essential for fully realizing blockchain’s potential as a social coordination tool in building decentralized networks. It’s become apparent that privacy is increasingly complex, and that private smart contracts or payments protocols may need to factor in some KYC, compliance, or illicit finance and sanctions screening features to enable users in different jurisdictions to participate and to limit developer exposure to legal risk. Current approaches include deposit delays, and deposit and withdrawal screening. Existing approaches are made even more complicated by fully-programmable smart contract platforms, where any developer can deploy their own bridge.

Please provide suggested compliance solution(s) to address illicit finance mitigation for privacy-enabling and programmable smart contract platforms. A solution should eliminate legal and regulatory risks to the greatest extent possible, while maintaining privacy and trustlessness.

A great challenge (the most difficult one), because the requirements conflict: confidentiality and security on the one hand, and compliance with regulatory requirements (verifiability) on the other.

There is no way to build a simple solution, especially in a VUCA (volatility, uncertainty, complexity, ambiguity) world, but a combination of ZKPs, KYC, and cybersecurity best practices could help.
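One hedged illustration of the ZKP + KYC combination: a user could prove membership in a Merkle tree of KYC attestations without revealing which leaf is theirs. The sketch below (function names and leaf format are invented) shows only the commitment structure in plain Python; in a real system the verify step would run inside a ZK circuit so the chosen leaf stays private.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(leaves: list[bytes]) -> list[list[bytes]]:
    """Merkle tree over hashed leaves; odd levels duplicate the last node."""
    levels = [[H(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2:
            cur = cur + [cur[-1]]
        levels.append([H(cur[i] + cur[i + 1]) for i in range(0, len(cur), 2)])
    return levels

def prove(levels: list[list[bytes]], idx: int):
    """Sibling path from leaf `idx` up to the root."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((level[idx ^ 1], idx & 1))  # (sibling, am-I-right-child)
        idx //= 2
    return path

def verify(root: bytes, leaf: bytes, path) -> bool:
    node = H(leaf)
    for sibling, is_right in path:
        node = H(sibling + node) if is_right else H(node + sibling)
    return node == root

attestations = [b"kyc:alice", b"kyc:bob", b"kyc:carol"]  # hypothetical leaves
levels = build_levels(attestations)
root = levels[-1][0]                    # only the root is published on-chain
assert verify(root, b"kyc:bob", prove(levels, 1))
```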

Details can be discussed later.

Optimal LVR Mitigation

Loss vs. rebalancing (aka LVR and pronounced ‘Lever’) was proposed in a 2022 paper as a way of modeling adverse selection costs borne by liquidity providers to constant function market maker decentralized exchanges (CFMM DEXs). Current work is focused on finding an optimal way to mitigate LVR in DEXs without using a price oracle.

Please describe the potential mitigations to LVR and argue why your proposed solution is better than all known alternatives.

A price oracle is a single point of failure (an attack vector). The solution is a protocol that can fetch external data and execute periodic transactions (cron-like tasks) natively, without external actors; such a protocol gives the ability to build an MM DEX.
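To ground the discussion, a toy Monte Carlo sketch of LVR itself (assuming geometric Brownian prices and a constant-product pool; all parameters are illustrative): it compares the pool's value along the curve to a rebalancing portfolio holding the same position but trading at market price, and checks the gap against the known sigma^2/8 rate for constant-product pools.

```python
import math, random

def simulate_lvr(k: float = 1e6, p0: float = 100.0, sigma: float = 0.5,
                 T: float = 1.0, steps: int = 200_000, seed: int = 7) -> float:
    """Accumulated LVR of a constant-product pool (x * y = k) versus
    a rebalancing portfolio, under a driftless GBM price path."""
    random.seed(seed)
    dt, p, lvr = T / steps, p0, 0.0
    for _ in range(steps):
        x = math.sqrt(k / p)                       # risky reserves at price p
        z = random.gauss(0.0, 1.0)
        p_new = p * math.exp(-0.5 * sigma**2 * dt + sigma * math.sqrt(dt) * z)
        # rebalancing portfolio marks the same position to the new price ...
        gain_rebal = x * (p_new - p)
        # ... while the pool revalues along the curve V(p) = 2 * sqrt(k * p)
        gain_pool = 2.0 * math.sqrt(k * p_new) - 2.0 * math.sqrt(k * p)
        lvr += gain_rebal - gain_pool              # adverse-selection cost
        p = p_new
    return lvr

v0 = 2.0 * math.sqrt(1e6 * 100.0)
print(f"simulated LVR ~ {simulate_lvr():,.0f}")
print(f"theory (approx) sigma^2/8 * V0 * T = {0.5**2 / 8 * v0:,.0f}")
```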

Designing the MEV Transaction Supply Chain

Assuming you could start from scratch, what is the optimal design of the miner extractable value (MEV) transaction supply chain? The process today is most naturally separated into distinct roles for searchers, builders, and proposers. What are the economic tradeoffs for maintaining these as separate roles versus having them consolidate? Are there new roles that would be beneficial to introduce? What are the optimal mechanisms to mediate how these different parties interact? Can the mechanisms mediating how the MEV supply chain functions be purely economic or are there components that require cryptographic solutions/trusted components?

The notion of what “optimal” means is intentionally left vague. Argue for what metrics are the most important when evaluating different mechanisms. Do we require strict collusion resistance between any groups of agents throughout the supply chain? Do we only require collusion resistance between agents at the same level of the supply chain? Is it enough that the mechanism’s properties hold in equilibrium or is it important that all parties have dominant strategies? On the other hand, what are lower bounds for how “optimal” the transaction supply chain can be? Are there certain conditions under which it is impossible to achieve all the “optimal” properties we might want?

This problem is left open to interpretation. Feel free to address any of the questions above or provide your own direction towards designing mechanisms for the transaction supply chain.

It's better to build next-generation protocol(s) without MEV problems than to keep supporting them.

The solution: a front-running prevention protocol; details later.
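One standard building block for such a protocol (a sketch of a well-known technique, not the specific design alluded to above) is commit-reveal ordering: the chain fixes transaction order on opaque commitments, so observers can't front-run based on transaction contents.

```python
import hashlib, os

class CommitRevealPool:
    """Toy mempool: ordering is fixed at commit time, before contents
    are visible, so a front-runner can't react to the transaction."""
    def __init__(self):
        self.commits: list[bytes] = []

    def commit(self, digest: bytes) -> int:
        self.commits.append(digest)
        return len(self.commits) - 1           # execution slot

    def reveal(self, slot: int, tx: bytes, salt: bytes) -> bytes:
        if hashlib.sha256(tx + salt).digest() != self.commits[slot]:
            raise ValueError("reveal does not match commitment")
        return tx                              # executed in slot order

pool = CommitRevealPool()
tx, salt = b"swap 100 USDC -> ETH", os.urandom(16)
slot = pool.commit(hashlib.sha256(tx + salt).digest())
assert pool.reveal(slot, tx, salt) == tx
```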

Leveraging Blockchain For Deepfake Protection

The rise of deepfakes (synthetic videos, photos, or audio recordings produced by artificial intelligence) that can convincingly replace a person’s likeness and voice — leading to potential misuse in misinformation campaigns, fraud, and other malicious activities — has been a common topic of conversation recently. While various deepfake detection methods are being researched, the challenge remains in providing a verifiable and trustless way to ensure the authenticity of digital content.

Blockchains and smart contracts present a promising avenue to counter this issue. By leveraging the immutable nature of blockchains and the automated execution of smart contracts, it’s possible to create a system that verifies and validates genuine content and differentiates it from tampered or deepfaked versions.

Your task is to devise a system that can enable viewers or platforms to verify the authenticity of videos, voice recordings, or photos. This may include reputation systems (reward or penalize based on the validation result, e.g., rewarding creators for genuine content or flagging tampered content) or it may not. Consider the scalability, privacy, and efficiency of your proposed system, especially when large video files are involved. Your solution should minimize computational and storage overheads and should be feasible for widespread adoption.

Key challenges include addressing the re-recording attack vector (if someone records a screen displaying a video, this secondary recording might bypass some naive authenticity checks) as well as allowing for legitimate changes (cropping, shortening videos).

To solve these problems, we need new DLT protocol(s) with an HPC VM able to process image, audio, and video files.
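A minimal sketch of the on-chain side (the ProvenanceRegistry class is hypothetical, and an in-memory dict stands in for a smart contract): creators register a content hash at capture time, and viewers look it up. As noted above, an exact hash breaks under re-recording or legitimate edits, which is exactly where perceptual hashing on an HPC VM would come in.

```python
import hashlib

class ProvenanceRegistry:
    """In-memory stand-in for an on-chain content-hash registry."""
    def __init__(self):
        self.records: dict[str, str] = {}

    def register(self, creator: str, content: bytes) -> str:
        digest = hashlib.sha256(content).hexdigest()
        # a real contract would also check the creator's signature here
        self.records[digest] = creator
        return digest

    def lookup(self, content: bytes):
        return self.records.get(hashlib.sha256(content).hexdigest())

reg = ProvenanceRegistry()
video = b"...raw video bytes..."
reg.register("studio.eth", video)
print(reg.lookup(video))              # 'studio.eth'
print(reg.lookup(video + b"\x00"))    # None: any bit-level change breaks it
```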

New DLT/blockchain protocol with HPC VM

// details later
