By AUJay
If I Migrate My Rollup From Single Proof to Continuous Aggregation, How Do I Keep Auditability and Instant Finality?
Executive context: why teams are migrating in 2025-2026
- Ethereum’s Pectra mainnet upgrade went live on May 7, 2025, introducing BLS12‑381 precompiles (EIP‑2537), repricing calldata (EIP‑7623), and raising blob throughput (EIP‑7691). Together these reshaped the cost and performance profile of verification and data availability.
- On December 3, 2025, Fusaka went further, activating PeerDAS (EIP‑7594) and introducing “Blob-Parameter-Only” (BPO) forks. The blob target/max rose to 10/15 on December 9 and again to 14/21 on January 7, 2026, substantially expanding the blob budget available to rollups.
- Verification is decoupling from the EVM. Decentralized proof-verification layers, like Aligned AVS on EigenLayer, batch and aggregate results and then attest on L1, cutting per-proof gas by 90-99% while scaling throughput to thousands of proofs per second.
- Sequencing UX is converging on “instant” preconfirmations. OP Stack chains are adopting Flashbots’ Flashblocks and Rollup-Boost to deliver ~200 ms confirmations while retaining L1 settlement.
The bottom line: continuous aggregation--folding a stream of proofs into a single verifiable chain--lets you prove more, spend less, and settle less often. It only pays off, though, if you design for auditability and user experience from the start.
Definitions we’ll use
- Single-proof model: Every L2 batch or block comes with its very own L1-verified proof.
- Continuous aggregation: You generate proofs for smaller steps (like blocks or batches), then you fold them together into a rolling accumulator. From time to time, you settle on a single proof that confirms a continuous L2 range.
You can achieve this through folding/IVC schemes (e.g. Nova/HyperNova) or recursion inside zkVMs (e.g. SP1), optionally wrapping the result in a succinct outer proof for L1 verification. (eprint.iacr.org)
Goal 1 -- Preserve auditability after you stop posting one proof per batch
Auditability means any third party can: (a) reconstruct L2 from on-chain data, and (b) check that a submitted proof--aggregated or not--covers exactly the claimed L2 block range and data.
Here’s how to preserve that property under continuous aggregation.
1) Create an “audit chain” of per-batch commitments in L1 events and within the recursive proof
- For every L2 block i, make sure to emit an L1 event that includes:
- batch_index i
- l2_block_hash, tx_data_commitment (like the Merkle/KZG root of transactions or state difference)
- a rolling accumulator root A_i = H(A_{i-1} || l2_block_hash || tx_root || i)
- Include A_start, A_end, and the exact [L2_start, L2_end] range in the public inputs of the aggregated proof. The verifier contract will keep a record of proven intervals and accumulator edges, so auditors can easily match the event logs to the proved range.
This gives auditors two ways to verify: (1) replay L2 from DA and recalculate A_end, or (2) perform spot checks to confirm the inclusion of any batch in the accumulator through the logged commitments.
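As an illustration, the rolling accumulator and the auditor’s replay check can be sketched in Python. The concatenation order and the choice of SHA-256 are assumptions for this sketch; match whatever your circuit actually constrains:

```python
import hashlib

def step_accumulator(prev: bytes, l2_block_hash: bytes, tx_root: bytes, i: int) -> bytes:
    """A_i = H(A_{i-1} || l2_block_hash || tx_root || i), with i as a 32-byte big-endian word."""
    return hashlib.sha256(prev + l2_block_hash + tx_root + i.to_bytes(32, "big")).digest()

def replay(batches: list, genesis: bytes = b"\x00" * 32) -> bytes:
    """An auditor replays batches 0..n from DA and checks the result against the
    A_end exposed in the aggregated proof's public inputs."""
    acc = genesis
    for i, (block_hash, tx_root) in enumerate(batches):
        acc = step_accumulator(acc, block_hash, tx_root, i)
    return acc
```

Because each A_i binds the batch index, a prover cannot silently drop or reorder a batch without changing A_end.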
2) Keep DA genuinely public and reconstructible
- Use Ethereum blobs (EIP‑4844/PeerDAS) for your rollup data. Each blob is 128 KiB and is retained for roughly 18 days. PeerDAS raises throughput without overloading nodes, and BPO forks are already pushing the target/max blob counts well past 2025’s planned 6/9. Schedule blob postings so that every batch’s data lands in L1 blobspace within your proving window. (eips.ethereum.org)
- Pectra’s EIP‑7691 already raised daily blob capacity; plan your pipeline around the 10/15 and 14/21 BPO steps landing Q4 2025-Q1 2026. (eip.fun)
- If you use alt‑DA, deploy a DA‑verifier/bridge that confirms availability on L1 during the challenge window, per L2BEAT’s guidance; otherwise auditability and Stage classification suffer. (forum.l2beat.com)
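A quick sanity check on blob scheduling, using the 128 KiB blob size and 12 s slot time from the protocol. The 10% network-share budget and the example batch size are assumptions for this sketch:

```python
BLOB_BYTES = 128 * 1024   # usable blob size per EIP-4844 (131,072 bytes)
SLOT_SECONDS = 12         # Ethereum slot time

def blobs_per_batch(batch_bytes: int) -> int:
    # ceiling division: number of blobs one batch occupies
    return -(-batch_bytes // BLOB_BYTES)

def fits_budget(batch_bytes: int, batches_per_hour: int, blob_target: int,
                share: float = 0.1) -> bool:
    """Does our posting cadence stay within our assumed share (10%) of the
    network-wide per-slot blob target?"""
    slots_per_hour = 3600 // SLOT_SECONDS
    budget = slots_per_hour * blob_target * share
    return blobs_per_batch(batch_bytes) * batches_per_hour <= budget
```

Re-run this check at every BPO step; a schedule that fits at target 10 may leave headroom to batch less aggressively at 14.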
3) Version your circuits and verifying keys on-chain
- Record circuit commitments (hashes of your proving and verifying keys) and bind them to the intervals they have proven. On upgrade, emit an upgrade event and set a grace period, keeping the old path active until the new circuits provide overlapping coverage. This keeps audits unambiguous across upgrades.
4) Expose the “proven range” status for wallets, bridges, and indexers
- Introduce a Solidity API that includes:
```solidity
function provenRange() external view returns (uint256 l2Start, uint256 l2End, bytes32 A_end);
function isBatchProven(uint256 i) external view returns (bool);
```
- This way, downstream systems can set up safety policies, like “only release funds when batch i falls within a proven range.”
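A downstream consumer of that API might enforce the policy like this. This is a sketch only; the `BridgePolicy` wrapper is hypothetical, and in production the range would be read from the verifier contract rather than held in memory:

```python
from dataclasses import dataclass

@dataclass
class ProvenRange:
    l2_start: int
    l2_end: int

class BridgePolicy:
    """Release funds only for batches inside the proven range, mirroring the
    article's isBatchProven semantics (the on-chain read is mocked here)."""
    def __init__(self, proven: ProvenRange):
        self.proven = proven

    def is_batch_proven(self, i: int) -> bool:
        return self.proven.l2_start <= i <= self.proven.l2_end

    def may_release(self, batch_index: int, da_available: bool) -> bool:
        # Gate on both DA availability and proof coverage.
        return da_available and self.is_batch_proven(batch_index)
```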
5) Document and test “rebuild from L1” end-to-end
- Maintain a source-available node that can reconstruct state strictly from L1 blobs and commitments, per L2BEAT’s guidelines. Your auditability depends on this existing and working. (l2beat.com)
Practical Note on Cost
Verifying a Groth16 proof on L1 costs a few hundred thousand gas, while STARK verifiers can run to multiple millions depending on the scheme and parameters. Aggregation cuts on-chain work to one verification per aggregate, and recursion moves batch verification off-chain.
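The amortization is easy to quantify. A back-of-envelope using the ~250k Groth16 figure above; the 3k-gas per-batch event overhead is an assumption for this sketch:

```python
def amortized_verify_gas(batches_per_aggregate: int,
                         verify_gas: int = 250_000,
                         per_batch_event_gas: int = 3_000) -> float:
    """Verification gas per batch when one aggregate proof covers many batches.
    verify_gas comes from the article's Groth16 estimate; the per-batch event
    cost is an assumed figure for the audit-chain logs."""
    return verify_gas / batches_per_aggregate + per_batch_event_gas

single = amortized_verify_gas(1)    # single-proof model: every batch pays full verification
agg = amortized_verify_gas(100)     # one aggregate covering 100 batches
```

With 100 batches per aggregate, per-batch verification cost drops from 253k to 5.5k gas; the audit-chain events become the dominant fixed overhead.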
Goal 2 -- Keep (or improve) “instant finality” for users
Users want an instant “it’s done” signal even when cryptographic finality takes minutes. With continuous aggregation, you can keep that responsiveness by pairing preconfirmations with a solid settlement path.
Four Proven Patterns:
A) Preconfirmations through decentralized sequencing or BFT committees
- OP Stack + Flashbots: Flashblocks/Rollup‑Boost is delivering ~200 ms confirmations on Base/Unichain, with verifiable ordering rules and TEE‑backed guarantees, and the rollout is set to cover the whole Superchain. (coindesk.com)
- BFT on L2: committee-based consensus gives single-slot finality on L2 while proofs and aggregates settle later; see ZKsync’s ChonkyBFT design. (arxiv.org)
B) Preconfirmations authenticated by BLS and verified on-chain efficiently
- Pectra’s EIP-2537 BLS12-381 precompiles let you cheaply verify aggregate BLS signatures for preconfirmations or DA attestations directly in Solidity--no big-integer workarounds required. This is a good fit for “soft finality” committees. (eips.ethereum.org)
C) Solid settlement with limited disputes
- Optimistic stacks now run permissionless fault proofs, so withdrawals and L1 settlement don’t depend on a multisig. OP Mainnet shipped permissionless fault proofs in June 2024; Arbitrum launched BoLD for permissionless validation on mainnet in February 2025. Both strengthen state-validation guarantees under preconfirmation UX. (theblock.co)
D) Managing Equivocation and Reorg Risk with Bonds and Slashing
- Back preconfirmations with an economic bond that is slashed on equivocation or reorganization. If you use a verification layer or AVS (like Aligned), lean on restaked security and combine attestations to harden the low-latency UX.
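The slashing condition for equivocation is simple to state: one operator, one slot, two different commitments. A minimal detector sketch (the class and its shape are assumptions; in practice each record would carry a verified BLS signature as evidence):

```python
class EquivocationMonitor:
    """Flags a preconfirmation signer for slashing if it signs two different
    batch commitments for the same slot. Signature verification is omitted
    in this sketch."""
    def __init__(self):
        self.seen = {}  # (operator, slot) -> first commitment observed

    def record(self, operator: str, slot: int, commitment: bytes) -> bool:
        """Returns True iff this record proves equivocation (slashable)."""
        key = (operator, slot)
        prior = self.seen.setdefault(key, commitment)
        return prior != commitment
```

Both conflicting signed messages form an on-chain slashing proof, so the monitor only needs to retain one commitment per (operator, slot).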
A reference migration blueprint (minimal downtime, maximum clarity)
Phase 0 -- Baseline and SLOs
- Define your SLOs: maximum proof lag, time to prove a given L2 range, preconfirmation latency/error budget, DA posting frequency, and rollback policy.
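Pinning the SLOs down as a machine-checkable config makes the later phases testable. Every number below is an assumed example, not a recommendation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RollupSLOs:
    """Phase 0 targets. All defaults are illustrative assumptions."""
    max_proof_lag_blocks: int = 300       # longest unproven tail we tolerate
    max_prove_seconds: int = 600          # wall-clock budget to prove a range
    preconf_latency_ms: int = 250         # p99 preconfirmation latency target
    preconf_error_budget: float = 0.001   # fraction of preconfs allowed to miss
    da_post_interval_slots: int = 5       # post blobs at least this often

    def violated(self, proof_lag_blocks: int, preconf_p99_ms: float) -> bool:
        # Alert when either the proving pipeline or the preconf path breaches target.
        return (proof_lag_blocks > self.max_proof_lag_blocks
                or preconf_p99_ms > self.preconf_latency_ms)
```

Wiring `violated()` into monitoring during Phase 1 gives you a baseline before anything user-facing changes.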
Phase 1 -- Dual-track Proving (Shadow Mode)
- Stand up the aggregator/recursive pipeline off-chain. Keep posting single proofs as today, but also generate rolling aggregates and publish their commitments via L1 events. Continuously compare the two tracks.
Phase 2 -- Verifier Contract Upgrade
- Add two functions:
```solidity
function proveRange(bytes calldata proof, bytes calldata publicInputs) external; // updates (l2Start, l2End, A_end)
function reportBatch(bytes32 batchCommit, uint256 i) external; // emits per-batch commitments if not already emitted elsewhere
```
- Tie verification to circuit version IDs and emit `VersionUpgraded` events.
Phase 3 -- DA Pipeline Hardening
- Move all batch data to blobs (EIP‑4844). Set `max_fee_per_blob_gas` guards around Pectra’s EIP‑7691 and the Fusaka BPO parameters, monitor blob base fee and utilization, and calibrate for the 6→9→15→21 cadence. Note that calldata fallback got more expensive after EIP‑7623, so use it only as emergency relief. (blog.ethereum.org)
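One way to size the guard: since the blob base fee grows by a fixed multiplicative factor per fully-utilized block under the EIP-4844 exponential update, compute the fee after a run of worst-case full blocks and cap there. The update-fraction constant below is an assumed value for the 10/15 parameter set:

```python
import math

BLOB_GAS_PER_BLOB = 131_072
UPDATE_FRACTION = 8_346_193   # assumed constant for the 10/15 BPO parameter set

def max_fee_guard(current_blob_base_fee: int, full_blocks_headroom: int,
                  target: int = 10, maximum: int = 15) -> int:
    """max_fee_per_blob_gas covering `full_blocks_headroom` consecutive full blocks.
    Uses the EIP-4844-style update: fee ~ exp(excess_blob_gas / UPDATE_FRACTION)."""
    per_block_excess = (maximum - target) * BLOB_GAS_PER_BLOB
    growth = math.exp(full_blocks_headroom * per_block_excess / UPDATE_FRACTION)
    return math.ceil(current_blob_base_fee * growth)
```

Re-derive the constants at each BPO fork; a guard tuned for 10/15 will be miscalibrated at 14/21.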
Phase 4 -- Preconfirmations Rollout
- Roll out BLS-backed preconfirmation committees with on-chain verification via the EIP-2537 precompiles, and define slashing conditions for double-signing or ordering faults. On the OP Stack, consider integrating Flashblocks/Rollup-Boost for ~200 ms UX.
Phase 5 -- Cutover and Deprecation
- Switch settlement to aggregated proofs at a defined L2 block boundary. Keep the single-proof path live as a fallback for one challenge window, and announce the deprecation timeline publicly.
Phase 6 -- Auditability drills and external validation
- Publish a reproducible “state‑from‑L1” script and run monthly public drills: given L1 slot X, reconstruct L2 state root Y and accumulator A_end. Align with L2BEAT’s Stage framework--challenge periods of at least 7 days for optimistic paths, and source-available replayers. (forum.l2beat.com)
Concrete design details that prevent audit and UX regressions
- Proof public inputs
- `start_block`, `end_block`
- `start_accumulator`, `end_accumulator`
- DA epoch/slot range and KZG commitments for each batch segment
- `circuit_version_id`
- Verifier contract storage
```solidity
mapping(uint256 => bytes32) batchCommitRoot;
struct Range { uint64 start; uint64 end; bytes32 A_end; uint32 circuitVersion; }
Range latest;
```
- Events
```solidity
event BatchCommitted(uint256 indexed i, bytes32 txRoot, bytes32 stateRoot, bytes32 accumulator);
event RangeProven(uint256 start, uint256 end, bytes32 A_end, uint32 version);
event CircuitVersionUpgraded(uint32 oldVersion, uint32 newVersion);
```
- Fallback semantics
- If the aggregator falls behind the SLA, allow a single-batch proof to settle a small tail range so exits stay live, then catch up via recursion later.
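The fallback decision can be made mechanical. A sketch with assumed thresholds (tie them to the Phase 0 SLOs in practice):

```python
from enum import Enum

class SettleMode(Enum):
    AGGREGATE = "aggregate"
    SINGLE_TAIL = "single_tail"   # emergency per-batch proof for a small tail

def choose_mode(aggregator_lag_blocks: int, sla_lag_blocks: int,
                tail_len: int, max_tail: int = 10) -> SettleMode:
    """Fall back to single-batch proofs only when the aggregator breaches the
    SLA and the unproven tail is small enough to settle directly.
    All thresholds are assumed examples."""
    if aggregator_lag_blocks > sla_lag_blocks and tail_len <= max_tail:
        return SettleMode.SINGLE_TAIL
    return SettleMode.AGGREGATE
```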
Cost, throughput, and scheduling you can actually budget
- Verification costs
- A single Groth16 verifier costs on the order of 10^5-10^6 gas (typically ~250k with common verifiers); STARK verifiers can reach multiple millions. With aggregation you pay one on-chain verification per range.
- DA costs
- With EIP‑7691 and PeerDAS live, blob capacity has risen substantially. The price response is currently asymmetric: roughly +8.2% after a “full blob” block versus about -14.5% after an empty one, which damps price spikes. BPO steps to 10/15 and 14/21 will push daily capacity higher still, so size your batches and posting intervals accordingly. (eip.fun)
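The asymmetry falls straight out of the exponential blob-fee update: a full block adds (max − target) blobs of excess, while an empty one subtracts the full target. A quick check, with the update-fraction constant assumed for the 10/15 parameter set:

```python
import math

BLOB_GAS_PER_BLOB = 131_072
UPDATE_FRACTION = 8_346_193   # assumed constant for the 10/15 BPO parameter set
TARGET, MAXIMUM = 10, 15

def fee_change_pct(blobs_used: int) -> float:
    """Percent change in blob base fee after one block using `blobs_used` blobs,
    per the EIP-4844-style exponential update."""
    delta = (blobs_used - TARGET) * BLOB_GAS_PER_BLOB
    return (math.exp(delta / UPDATE_FRACTION) - 1) * 100

full = fee_change_pct(MAXIMUM)   # ~ +8.2% after a full block
empty = fee_change_pct(0)        # ~ -14.5% after an empty block
```

Because the downward step is nearly twice the upward step, sustained full blocks are needed before fees spike, which is what smooths short demand bursts.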
- Proof throughput
- Modern zkVMs are built around native recursion and aggregation. SP1 Hypercube reports “<12 seconds for 93-99% of mainnet blocks” on commodity GPU clusters--a good fit for continuous rollup proving and aggregation SLAs.
- Off‑chain verification layers
- If your bottleneck is verification rather than proving, consider an off-chain verification AVS: they process thousands of proofs per second, batching them and posting a single aggregated result to L1, which cuts both queueing latency and gas.
2026 Budget Guidelines
For 2026 budgeting: in steady state, blob posting and the single aggregated verifier call dominate marginal costs. The settlement transaction’s execution gas often exceeds the blob base fee outside short demand spikes.
Set your max_fee_per_blob_gas with headroom for BPO epochs. (blog.ethereum.org)
Sequencer and governance considerations you shouldn’t skip
- Decentralize sequencing over time. Some shared-sequencer projects have wound down (Astria shut down in December 2025), while OP-Stack+Flashbots infrastructure is rolling out at scale. Plan migrations so your instant-finality path doesn’t hinge on a single vendor chokepoint. (unchainedcrypto.com)
- Make sure your challenge and exit windows align with the Stage criteria--think at least 7 days for optimistic paths. It’s best not to lean on a Security Council for non-emergency liveness, and don’t forget to document the forced-inclusion user experience. These choices can really impact the risk evaluation. (forum.l2beat.com)
Example: converting a zkEVM rollup to continuous aggregation
- Prover Side
- Keep the block-level circuits intact, and add a folding layer (e.g. HyperNova/IVC) that consumes block proofs and updates the accumulator.
- Periodically SNARK-wrap the latest IVC state into an L1-verifiable proof, triggered by a time or size threshold--say every 5 to 10 minutes or every N blocks.
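The wrap trigger is just a whichever-comes-first policy. A sketch; the 600-second and 256-block thresholds are assumed examples:

```python
import time
from typing import Optional

def should_wrap(last_wrap_ts: float, blocks_since_wrap: int,
                now: Optional[float] = None,
                max_interval_s: int = 600, max_blocks: int = 256) -> bool:
    """Trigger a SNARK wrap of the IVC state every ~5-10 minutes or every N
    blocks, whichever comes first. Both thresholds are assumed examples."""
    now = time.time() if now is None else now
    return (now - last_wrap_ts >= max_interval_s) or (blocks_since_wrap >= max_blocks)
```

The time bound caps proof lag during quiet periods; the block bound caps wrapper proving cost during bursts.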
- L1 Contracts
- Add a `proveRange()` function that checks the succinct wrapper, updates the [start, end] window, and stores A_end.
- Emit `BatchCommitted` for each batch, regardless of settlement mode.
- Preconfirmations
- A committee of restaked operators signs BLS preconfirmations over the ordered batches. The bridge holds L1 exits until (i) the batch data is available on DA and (ii) the batch index is ≤ proven_end.
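The committee's soft-finality rule is a quorum check. A sketch with an assumed 2/3 threshold; the set-of-signers stand-in replaces real BLS aggregation, which on-chain would be a single aggregate-signature verification via the EIP-2537 precompiles:

```python
def preconf_quorum(signers: set, committee: set,
                   threshold_num: int = 2, threshold_den: int = 3) -> bool:
    """A preconfirmation counts as soft-final once >= 2/3 of the committee has
    signed it. Threshold and set-based representation are assumptions."""
    valid = signers & committee   # ignore signatures from non-members
    return threshold_den * len(valid) >= threshold_num * len(committee)
```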
This keeps auditor workflows intact (like replaying from blobs and cross-checking accumulators), helps tighten up operational costs (only one verification needed per range), and ensures that users experience a sub-second UX thanks to preconfirmations.
Emerging best practices we recommend adopting in 2026
- Avoid single-prover dependence: use multi-backend provers and “proof markets” that fold outputs from different provers into one aggregate, and integrate the maturing decentralized proving and verification networks into your workflow. (blog.succinct.xyz)
- For circuit and version governance, be sure to share circuit commits and create upgrade playbooks that include overlapping windows and some simulated “break-glass” fallback options with single-proof solutions.
- Observability: run public dashboards for “proof lag,” “unproven tail length,” blob usage, and preconfirmation rates, and publish monthly reproducibility attestations.
Checklist: ship continuous aggregation without losing trust or UX
- Events and logs connect each L2 batch to an accumulator, and the public inputs for aggregated proofs pinpoint exact ranges.
- DA uses blobs, and the posting schedule guarantees reconstructability within the blob retention period and PeerDAS-scaled capacity. (blog.ethereum.org)
- The verifier contract shows off the proven range and circuit versions, plus it has APIs ready for wallets and bridges.
- Preconfirmations are live, backed by BLS aggregate signatures verified via EIP-2537, with slashing and DA gating. (eips.ethereum.org)
- Settlement proofs can be disputed without permission where it makes sense (think OP/BoLD-like dispute windows). (theblock.co)
- We've got the source-available state reconstruction from L1 all documented and tested. (l2beat.com)
By sticking to this plan, continuous aggregation evolves into a powerful extension of what you already have: a lower L1 footprint, faster proving throughput, at least the same level of auditability (if not better), and quick UX confirmations that are both financially backed and cryptographically secure.
Need some help with planning or executing this migration? 7Block Labs has got you covered! We handle rollup pipelines from start to finish. That means everything--from contracts and DA budgeting to prover orchestration, preconfirmations, and audit tooling--is taken care of, using the same building blocks mentioned here.
Like what you're reading? Let's build together.
Get a free 30-minute consultation with our engineering team.