Scaling to 10GB Infinity Blocks: Avail’s Horizontal Scalability

How Avail achieves 10GB blocks without centralized infrastructure - a deep dive into the critical changes required to reach previously unimagined scale for decentralized blockchain technology.

By Prabal Banerjee

Breakthrough scalability is fundamental to unlocking the next wave of blockchain applications and unifying today’s fragmented ecosystem. Avail’s mission is to empower developers to build high throughput blockchain apps that scale horizontally and interoperate effortlessly, backed by the security of multiple crypto-economic assets, all without relying on any centralizing infrastructure.

The only way to achieve this at any meaningful scale is via a horizontally scalable base layer on top of which multiple blockchains can connect with seamless interoperability. Enabling this requires scaling Avail’s throughput dramatically from where it is today (4MB/block) to 10GB/block, while reducing the block time from 20sec to 600ms.
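To put that target in perspective, here is a back-of-the-envelope comparison of data throughput using the figures above. The code is purely illustrative arithmetic, nothing Avail-specific:

```python
# Rough throughput comparison between today's parameters and the target.
current_mb = 4             # MB per block today
current_time_s = 20.0      # block time today
target_mb = 10 * 1024      # 10GB expressed in MB
target_time_s = 0.6        # 600ms target block time

current_throughput = current_mb / current_time_s   # MB/s
target_throughput = target_mb / target_time_s      # MB/s

print(f"today:  {current_throughput:.1f} MB/s")                        # 0.2 MB/s
print(f"target: {target_throughput:,.0f} MB/s")                        # ~17,067 MB/s
print(f"~{target_throughput / current_throughput:,.0f}x increase")     # ~85,333x
```

In other words, the roadmap below is aiming for roughly five orders of magnitude more data throughput, which is why incremental tuning alone cannot get there.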

This article kicks off a series diving into Avail’s ambitious scalability roadmap and the technical transformations that are underway. At the heart of this is maintaining permissionless access and cryptographic decentralization for every node, ensuring every participant can access a truly open and decentralized network.

Where We Are Today (4MB)

To explain how Avail can reach 10GB blocks, we will orient ourselves around the three key phases that contribute to block time: block generation, block propagation, and block verification. The block time on Avail is 20 seconds today, allowing ~6 seconds per phase. Once blocks are produced on Avail, Light Clients (LCs) can sample them immediately from consumer-grade hardware. Finality occurs after two blocks, around 40 seconds after production. However, LCs can still sample non-finalized blocks if needed, allowing them to achieve full-node-equivalent guarantees regarding data availability without waiting for finalization.

Today, the times required for block generation and verification are almost identical. This is because block verification essentially repeats the process of block generation in order to check that it is correct.

The components in an Avail block are:

  1. Non-DA Transactions: e.g. staking, AVAIL token balance transfers, etc.
  2. DA Transaction Packaging: arranging DA blobs in the block, setting the data matrix (how blobs are laid out in Avail blocks), and other operations.
  3. Generating Commitments: computing the commitments takes the most time, after which the block header is generated.

Generating the commitments takes 200-400ms on hardware meeting the standard node requirements. DA transaction packaging also takes a few hundred milliseconds. The overarching point to note here is that increasing block size in the short term faces no major bottlenecks.
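To make the packaging step concrete, here is a toy sketch of laying blobs into a row-major data matrix. The cell size, row width, and `pack_blobs` helper are illustrative inventions, not Avail’s actual parameters; the point it demonstrates is that a blob can end mid-row, which is why the appID ends up being tracked per cell:

```python
from dataclasses import dataclass

CELL_SIZE = 32   # bytes per matrix cell (illustrative, not Avail's real parameter)
ROW_WIDTH = 4    # cells per row (tiny, for illustration)

@dataclass
class Cell:
    app_id: int
    data: bytes

def pack_blobs(blobs):
    """Lay (app_id, payload) blobs into a row-major data matrix.

    A blob may end mid-row, so the next blob starts in the same row and
    the app_id must be tracked at the cell level rather than per row.
    """
    cells = []
    for app_id, payload in blobs:
        for i in range(0, len(payload), CELL_SIZE):
            chunk = payload[i:i + CELL_SIZE].ljust(CELL_SIZE, b"\x00")
            cells.append(Cell(app_id, chunk))
    # pad the final row with empty cells so every row has ROW_WIDTH cells
    while len(cells) % ROW_WIDTH:
        cells.append(Cell(0, b"\x00" * CELL_SIZE))
    return [cells[i:i + ROW_WIDTH] for i in range(0, len(cells), ROW_WIDTH)]

# a 40-byte blob for app 1 and a 100-byte blob for app 2 share the matrix
matrix = pack_blobs([(1, b"A" * 40), (2, b"B" * 100)])
```

Row-level commitments over a matrix like this must account for rows that mix data from different appIDs, which is exactly the complexity the roadmap below works to remove.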

Short-term Scaling: Achieving 2x, 4x, 8x Blocks

A simple way of increasing block size is to pack more blobs into the same block. This is the approach used to scale from 2MB to 4MB earlier this year, and will be the approach we take in the short term to increase block size further.

We have tested 8MB and 16MB blocks, which will begin rolling out on testnet and mainnet.

There are other important factors to consider, as an increase in block size must be reflected elsewhere in the network, including its impact on light client p2p network performance, hardware wallet updates, and so on.

Reaching The Inflection Point: 10GB Blocks

The current scaling methodology will hit upper bounds eventually, requiring a fundamentally new approach to reach our 10GB target. 

With Avail’s current architecture, one or more of the following will eventually happen:

  • Block generation will take more than ⅓ of the total block time (~6 secs).
  • Block verification will take more than ~6 secs.
  • Block propagation will take more than ~6 secs.

Reducing the block time in order to further increase throughput would only exacerbate the above issues, making it an unfeasible path to reach 10GB blocks and a 600ms block time.
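A naive linear extrapolation from the 200-400ms-per-4MB commitment figure above shows how quickly the generation budget runs out. Real scaling behavior is not perfectly linear, so treat these numbers as illustrative only:

```python
# If commitments take ~300ms per 4MB block today (midpoint of 200-400ms),
# at what block size does generation alone exhaust a ~6s phase budget?
ms_per_mb = 300 / 4          # ~75ms of commitment work per MB
phase_budget_ms = 6000       # ~1/3 of a 20s block time

ceiling_mb = phase_budget_ms / ms_per_mb
shortfall = 10 * 1024 / ceiling_mb   # how far that is from 10GB

print(f"budget exhausted around {ceiling_mb:.0f}MB blocks")  # 80MB
print(f"still ~{shortfall:.0f}x short of 10GB")              # ~128x
```

Even under this generous linear assumption, the current approach tops out two orders of magnitude below the target, which motivates the structural changes below.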

Here, we propose a three-step plan to reach our north star for scaling Avail: 10GB blocks with a 600ms block time.

Step 1: Optimize Commitment Generation

Today, block producers compute commitments for rows of the data matrix. Submitted blobs can fill partial rows or stretch across multiple rows, requiring the appID to be tracked at the cell level and creating other minor but important complications when breaking free of the current scaling limitations. Instead, we will arrange the data matrix such that each entry contains both the data blob and the commitment for that blob.

This change allows nodes (block producers, but also RPCs) to check the commitment right after the transaction has been submitted to the mempool. Because data blobs have no state access and therefore do not have state contention, almost all sanity checks on the data blob can be done at the mempool level.

With this change in place, block production simply requires block producers to import blobs and their commitments, order them by appID, and create the header. It’s worth noting this change does not introduce new security assumptions, as each node relies on its own mempool sanity checks.
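A minimal sketch of this flow, using a plain hash as a stand-in for Avail’s actual KZG polynomial commitments (a hash is not a real polynomial commitment and cannot be sampled against, and `Mempool`/`produce_block` are hypothetical names, not Avail APIs):

```python
import hashlib

def commit(blob: bytes) -> bytes:
    """Stand-in for a real polynomial (KZG) commitment; a hash keeps the
    sketch self-contained but lacks the sampling properties Avail needs."""
    return hashlib.sha256(blob).digest()

class Mempool:
    """Each node checks blob/commitment pairs at ingress, so block
    production can import them without recomputing anything."""
    def __init__(self):
        self.verified = []

    def submit(self, app_id: int, blob: bytes, commitment: bytes) -> bool:
        # Data blobs touch no state, so this check has no state contention
        # and can run entirely at the mempool level.
        if commit(blob) != commitment:
            return False  # reject: commitment does not match the data
        self.verified.append((app_id, blob, commitment))
        return True

def produce_block(mempool: Mempool):
    # Block production is now just: order verified pairs by appID,
    # then build the header over the commitments (header omitted here).
    return sorted(mempool.verified, key=lambda entry: entry[0])

mp = Mempool()
mp.submit(2, b"blob-two", commit(b"blob-two"))
mp.submit(1, b"blob-one", commit(b"blob-one"))
block = produce_block(mp)
```

The key property is that verification has moved to transaction ingress: the block-generation critical path no longer computes commitments at all.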

We have now removed the commitment generation step from the block generation path, essentially removing the bottleneck we discussed earlier.

Step 2: Delegate Compute

Here, we utilize the power of data availability sampling (DAS) and the cryptographic commitments unique to Avail.

Suppose that every DA transaction is just a commitment, and that mempools receive and gossip commitments only. In addition, nodes sample the data behind the commitments to gain confidence in its availability. This is essentially what Avail’s Light Clients (LCs) do today.

With this approach, we essentially assume every node is an LC, sampling and verifying submitted commitments and agreeing on the order of commitments. 

Block generation will then include commitments from the mempool which the block producer has sampled. For block verification, validators will only accept and sign a block if it contains commitments they have also independently sampled and gained high confidence in. Block propagation then simply requires propagating the header after block production.

This design drastically reduces both the computational overhead and bandwidth requirements for the entire network. It’s worth noting that although no single node necessarily has all the data, all the data will be available in aggregate across the network. Any node or user can always download any data they want directly from the p2p network.
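The “high confidence” gained from sampling can be quantified. Assuming uniform sampling with replacement, if a fraction of the erasure-coded data is withheld, each sample independently misses the gap, so confidence grows exponentially in the number of samples. The specific figures below are illustrative, not Avail’s production parameters:

```python
# If a fraction `withheld` of cells is unavailable, one uniform sample
# misses the gap with probability (1 - withheld), so after k samples the
# probability of having detected the withholding is 1 - (1 - withheld)^k.

def confidence(withheld: float, samples: int) -> float:
    return 1 - (1 - withheld) ** samples

def samples_needed(withheld: float, target: float) -> int:
    k = 0
    while confidence(withheld, k) < target:
        k += 1
    return k

# Erasure coding forces an adversary to withhold a large fraction of the
# extended data to block reconstruction, so `withheld` is bounded below.
print(samples_needed(0.5, 0.9999))   # 14 samples for 99.99% confidence
```

This is why a handful of random samples per node is enough: confidence compounds per sample, and across the whole network the aggregate coverage is far stronger still.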

Step 3: SNARKify Non-DA Activity

As noted above, Avail also contains non-DA activity, e.g. balance transfers, staking, etc. Today, these transactions are re-executed by all full nodes, per the monolithic construct we are moving away from. By removing this re-execution step from the equation, we can bring the horizontal scalability vision to life for blockchains at an unimaginable scale, while retaining cryptographically verifiable decentralization.

This is because:

  • LCs (and therefore every other node) can assert DA correctness for any fork of the chain, in a trust-minimized way, via DAS.
  • LCs rely on validator consensus to know which order to follow, aka the fork choice rule.
  • LCs are currently unable to verify the correctness of non-DA transactions without re-executing them.

Therefore, we must SNARKify all non-DA transactions so that LCs can independently verify them. There are many ways to do this in theory; in practice, multiple trade-offs will need to be considered and addressed.

What’s Next?

Next in the series, we will look at the upgrades to the LC to enable the Avail Stack to support decentralized applications at this scale. If you are interested in pursuing aspects of this development path in more detail, either with regard to research, development or testing, please get in touch via this form.