By AUJay
Would Rolling Up Thousands of Tiny Proofs Into One Aggregated Proof Noticeably Cut Latency for Cross-Chain Oracle Updates?
Short answer: sometimes--but only when on-chain verification throughput is what is slowing you down. For most cross-chain oracle routes expected in 2025-2026, the dominant delays are source-chain finality, relayer time, and destination execution. In those cases, aggregation is primarily a cost optimization, not a latency one.
Who this is for
Decision-makers setting up or improving oracle, bridging, or cross-chain data pipelines who need solid data, current numbers, and a hands-on guide.
Executive summary
- Rolling many small proofs into one aggregated proof cuts latency only where blockspace or verifier capacity causes multi-block queuing on the destination chain; there, aggregation shortens time-to-inclusion and the perceived wait. In most other cases it changes little. (blog.alignedlayer.com)
- In 2025-2026, the dominant driver of cross-chain latency is the source chain's finality policy. Ethereum's "finalized" status typically takes ~12.8-15 minutes; any oracle or bridge stack that waits for it inherits that delay, and cryptographic aggregation cannot shorten it. (docs.chain.link)
- If your stack uses BLS signatures or zk proofs, note Pectra's EIP-2537 (BLS12-381 precompiles) and EIP-7691/7623, which changed the economics: pairing checks are cheaper on BLS12-381 than on BN254, blobs are bigger and cheaper, and calldata-heavy designs cost more. Aggregate to save gas and fit everything into one transaction; latency gains depend on the situation. (blog.ethereum.org)
What “aggregation” actually means (and why it’s easy to overpromise)
“Rolling up thousands of tiny proofs” can mean three different things:
- zk proof recursion: merge many proofs into a single recursive proof that is verified on-chain once. This saves cost, but proving latency grows with batch size unless you parallelize and use tree aggregation. (polygon.technology)
- Signature aggregation: combine multiple publisher or validator signatures into one BLS aggregate signature, checked on-chain with a single pairing check. This boosts throughput and cuts calldata, but the latency win only materializes if on-chain signature verification was the bottleneck in the first place. (eips.ethereum.org)
- Attestation aggregation: relay one Merkle root for a batch of messages (CCIP/Wormhole roots, Pyth Merkle bundles) so the destination checks a single root plus inclusion proofs. This is mostly a gas/bytes saving; end-to-end latency still depends on source-chain finality and relayer speed. (docs.chain.link)
If your cross-chain oracle waits for source-chain finality (many do), that “finality wait” dwarfs the few milliseconds of signature checks or the seconds of recursive proving--so aggregation won’t make seconds-level updates appear out of a 15-minute finality policy. (docs.chain.link)
Current latency anchors you can actually plan around
- Chainlink CCIP waits for source finality; its reference table shows Ethereum at ~15 minutes for "finalized," with many L2s in the tens of minutes (finality tag or block depth). That sets a floor on message latency for L1→L1 and L1→L2 routes. (docs.chain.link)
- Wormhole uses fixed "consistency levels" per chain (~14 seconds for Solana, ~19 minutes for Ethereum) before Guardians sign a VAA, so a message cannot arrive faster than those timers allow. (wormhole.com)
- Hyperlane validators wait for reorg-safe depths per chain (e.g., ~10 blocks, about 20 seconds, on Base) before signing. A case study reports median production latencies of ~31 seconds on well-tuned routes. (docs.hyperlane.xyz)
- zk light-client bridges remove external trust assumptions but add proving time. Current public guidance for Ethereum-anchored routes is roughly "finality (~15 min) + proving (seconds to minutes)," i.e., ~20 minutes for conservative setups. (7blocklabs.com)
Takeaway: unless verification throughput at the destination is saturated, aggregation mainly reduces cost, not p95 latency. (blog.alignedlayer.com)
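To make the takeaway concrete, here is a back-of-the-envelope latency model: end-to-end time is (roughly) the sum of sequential stages, and aggregation only touches the inclusion term. All stage times below are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope model: end-to-end latency is (roughly) the sum of
# sequential stages. All stage times below are illustrative placeholders.

def end_to_end_latency_s(finality_wait_s, proving_s, relay_s, inclusion_s):
    return finality_wait_s + proving_s + relay_s + inclusion_s

# Ethereum "finalized" route: the finality wait dominates everything else.
baseline = end_to_end_latency_s(
    finality_wait_s=15 * 60,  # ~15 min "finalized" policy
    proving_s=10,             # recursive proving (seconds)
    relay_s=5,
    inclusion_s=12,           # one destination block
)

# Congested destination: 5 blocks of queuing vs 1 block after aggregation.
congested = end_to_end_latency_s(15 * 60, 10, 5, 5 * 12)
aggregated = end_to_end_latency_s(15 * 60, 10, 5, 12)

print(baseline)                # 927 s total (~15.5 min)
print(congested - aggregated)  # 48 s saved: real, but small next to finality
```

The shape of the result is the point: trimming queuing saves tens of seconds, while the finality term contributes hundreds.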
The two cases where aggregation really does cut latency
1) When on-chain verification is the bottleneck
- Reality: Ethereum blocks hold about 30M gas, and even efficient SNARK verifiers cost ~200,000-300,000 gas each, so a destination chain can process only a limited number of verifications per block. Beyond that, messages spill into future blocks and queueing latency grows. (hackmd.io)
- Remedy: verify off-chain (for example, via an EigenLayer AVS) and post a single aggregated attestation to L1, or produce one recursive proof. Aligned's Proof Verification Layer reports 2,500+ proofs/second on testnet and hundreds/second on the mainnet fast path, with an on-chain footprint of one BLS-aggregated result--eliminating multi-block queues. (docs.alignedlayer.com)
- What you gain: less queueing delay under load, which shrinks end-to-end tails (p95/p99) whenever destination blockspace is the bottleneck.
2) When per-update signature storms dominate your on-chain time
- A "signature storm" is the load pattern where every oracle update carries many publisher or validator signatures, so the destination chain spends most of its gas and blockspace checking signatures rather than applying updates. High update frequency, large signer committees, and general congestion all make it worse: verification transactions queue, priority fees climb, and users see delays.
- If your oracle setup verifies many publisher or validator signatures per update, switch to BLS aggregate verification. Since EIP-2537 (Pectra, May 7, 2025), Ethereum has a BLS12-381 pairing precompile at address 0x0f with a gas cost of 32,600·k + 37,700 for k pairings--cheaper per pairing than BN254--plus fast MSM precompiles for aggregation. The impact: N verifies collapse to one, which shortens time to inclusion. (blog.ethereum.org)
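A rough gas comparison under the EIP-2537 pricing above (32,600·k + 37,700 per k-pairing call; calldata excluded, and the 21-signer committee is an arbitrary example):

```python
# EIP-2537 BLS12-381 pairing precompile gas: 32,600*k + 37,700 for k pairings.
# Compare n individual signature verifies (2 pairings each) against one
# distinct-message aggregate verify (a single call with k = n + 1 pairings).

def pairing_gas(k: int) -> int:
    return 32_600 * k + 37_700

def individual_verifies(n: int) -> int:
    return n * pairing_gas(2)      # n separate 2-pairing calls

def aggregate_verify(n: int) -> int:
    return pairing_gas(n + 1)      # one call, n + 1 pairings

n = 21  # hypothetical 21-signer oracle committee
print(individual_verifies(n))  # 2,160,900 gas across 21 transactions
print(aggregate_verify(n))     # 754,900 gas in one call (~2.9x less)
```

Beyond the gas ratio, the bigger operational win is that one call replaces N transactions competing for inclusion.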
When aggregation doesn’t help (and can even hurt)
- When your pipeline is finality-bound (e.g., CCIP waiting for "finalized" on Ethereum), aggregation cannot outrun the finality clock. Tuning the consistency/finality policy--"block depth" vs. "finalized"--buys far more latency than cryptography does. (docs.chain.link)
- Big recursive batches add proving time. On standard hardware, modern stacks aggregate at human-friendly speeds, not sub-second ones:
- Plonky2 recursion has sub-second primitives, but aggregating hundreds to thousands of proofs still takes seconds even on a 4090; recent benchmarks report ~6.1 seconds for 1,024 RISC0 proofs. That is great for amortizing cost, poor for tick-by-tick oracle updates. (telos.net)
- zkVMs like SP1 keep improving GPU proving, but plan on seconds--not milliseconds--for larger recursive wraps today. (succinct.xyz)
Concrete, current numbers to calibrate your design
- BLS On-Chain Verification (Post-Pectra)
- The BLS12-381 pairing precompile lives at address 0x0f and costs about 32,600·k + 37,700 gas for k pairings. A single-signature verification (two pairings) uses ~102,900 gas excluding calldata; a distinct-message aggregate verification over n signers is one call with k = n + 1. (eips.ethereum.org)
- Groth16 (BN254, Post-EIP-1108)
- A typical verifier costs ~200k-300k gas depending on public inputs; the pairing precompile dominates at about 34,000·k + 45,000. (eips.ethereum.org)
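Putting the two precompile price curves side by side (the formulas come from the EIPs cited above; this is arithmetic, not a benchmark):

```python
# Pairing precompile pricing: BN254 (post-EIP-1108) vs BLS12-381 (EIP-2537).

def bn254_pairing_gas(k: int) -> int:
    return 34_000 * k + 45_000

def bls12_381_pairing_gas(k: int) -> int:
    return 32_600 * k + 37_700

for k in (2, 4, 8):  # 2 pairings is a typical single-signature check
    print(k, bn254_pairing_gas(k), bls12_381_pairing_gas(k))
# k=2: 113,000 vs 102,900 gas -- BLS12-381 is cheaper at every k.
```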
- Destination Blockspace Saturation → Queuing
- At ~250k gas per oracle update, a ~30M-gas Ethereum block fits roughly 120 updates; the 121st waits for the next block. Aggregating those 120 verifies into one avoids multi-block queuing and can save minutes under load. An AVS path (e.g., Aligned) can compress thousands of verifies into a single L1 result. (blog.alignedlayer.com)
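The queuing arithmetic above, as a sketch (it optimistically assumes a whole ~30M-gas block is available to oracle updates, whereas real blocks are shared):

```python
# How destination blockspace saturation becomes queuing delay.

import math

BLOCK_GAS = 30_000_000   # approximate Ethereum block gas limit
SLOT_TIME_S = 12

def queuing_delay_s(n_updates: int, gas_per_update: int = 250_000) -> int:
    """Extra wait for the last update, beyond its first eligible block."""
    per_block = BLOCK_GAS // gas_per_update           # ~120 updates per block
    blocks_needed = math.ceil(n_updates / per_block)
    return (blocks_needed - 1) * SLOT_TIME_S

print(queuing_delay_s(120))    # 0 s: everything fits in one block
print(queuing_delay_s(5_000))  # 492 s: an ~8-minute tail without aggregation
print(queuing_delay_s(1))      # 0 s: one aggregated verify never queues
```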
- Finality Anchors (What Actually Affects Wall-Clock Time)
- Ethereum "finalized" takes roughly 12.8-15 minutes, and many L2s inherit L1 finality timelines for cross-chain flows. CCIP publishes a per-chain table with minute-scale latencies for L1s and rollups; if you need seconds, you must accept looser consistency or use fast-finality sources. (docs.chain.link)
- Wormhole consistency levels are ~14 seconds for Solana and ~18-19 minutes for Ethereum, Optimism, and Arbitrum. Guardians sign only after those windows, so you cannot aggregate your way around them. (wormhole.com)
- Hyperlane validators wait for reorg-safe block depths per chain; the depth tables put Base at 10 blocks, about 20 seconds. (docs.hyperlane.xyz)
Case studies: what changes if you aggregate?
1) Ethereum L1 → OP Mainnet via CCIP, waiting for "finalized" on Ethereum
- Without aggregation: end-to-end is roughly 25-30 minutes once you add Ethereum finality, relay time, and OP inclusion. (7blocklabs.com)
- With aggregation: per-message verification cost on OP drops (one batched root), but you still wait ~15 minutes for ETH finality, so p95 barely moves. Accepting "block depth" instead of "finalized" cuts the wait--but that is a policy change, not something aggregation provides. (docs.chain.link)
2) Solana → Ethereum via Wormhole (Finalized Settings)
- Without aggregation: Guardians sign after ~14 seconds on Solana; on Ethereum you verify the VAA and execute. In practice, p50 is ~30-60 seconds end-to-end with reasonable gas settings. (7blocklabs.com)
- With aggregation: if your app redeems many VAAs in one block, aggregate execution/verification. It cuts calldata and signature checks and trims tail latency under load by avoiding multi-block queuing on Ethereum. Source-side finality (~14 s) and destination inclusion remain the main time drivers. (wormhole.com)
3) zk light-client validation (Ethereum header → destination chain)
A zk light client lets the destination chain validate Ethereum headers directly: instead of trusting an external committee, the destination verifies a succinct zero-knowledge proof that a header is valid, then accepts any state committed to by that header. Only headers and proofs move cross-chain, which keeps bandwidth and on-chain verification costs small while preserving trust-minimized security.
- Without aggregation: one proof per header. Verification is fairly cheap, but you generate a proof at every hop; wall-clock time is roughly finality + proving (often seconds to minutes) + inclusion. (7blocklabs.com)
- With recursive aggregation: batch multiple headers into one proof. Verification becomes a single cheap on-chain check, but proving takes longer (seconds), so aggregate only when it avoids block queuing or protects your L1 fee budget. Some teams report 12-20 seconds "prove+verify" for Ethereum headers in optimized pipelines; validate against your own route and hardware. (blog.polyhedra.network)
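The batch-sizing trade-off can be sketched with a toy cost model. The flat 300k-gas verify, the 12 s header interval, and the proving-time coefficients below are all assumptions for illustration, not measurements:

```python
# Recursive header aggregation: amortized verify gas vs added staleness.

def amortized_verify_gas(batch_size: int, verify_gas: int = 300_000) -> float:
    # One on-chain verify is shared by every header in the batch.
    return verify_gas / batch_size

def added_staleness_s(batch_size: int, header_interval_s: int = 12,
                      prove_base_s: float = 5.0,
                      prove_s_per_item: float = 0.05) -> float:
    # The oldest header waits for the batch to fill, then for proving.
    fill_wait_s = (batch_size - 1) * header_interval_s
    return fill_wait_s + prove_base_s + prove_s_per_item * batch_size

for n in (1, 8, 64):
    print(n, amortized_verify_gas(n), round(added_staleness_s(n), 1))
# Larger batches amortize gas but age the oldest header -- size to your SLO.
```

Under this model, a 64-header batch is ~64x cheaper per header to verify but leaves its oldest header several hundred seconds stale, which is exactly the "batch size vs SLO" tension described above.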
Pectra changed your cost model--design accordingly
- BLS12‑381 precompiles (EIP‑2537): make BLS signatures and BLS-curve SNARK verification practical and cheaper per pairing than BN254. For multi-signer oracle attestations, prefer BLS aggregates to save on-chain time and calldata. (eips.ethereum.org)
- Blob throughput doubled (EIP‑7691): more, cheaper blobspace for rollups and data-heavy systems. If your oracle or bridge posts data batches, prefer blobs over calldata and revisit posting cadence. (eips.ethereum.org)
- Calldata floor cost increased (EIP‑7623): data-heavy transactions got pricier. Aggregation cuts calldata bytes per update and keeps you within fee budgets--again, a cost win first and foremost. (eips.ethereum.org)
How major oracle stacks actually move data cross-chain today
- Chainlink Data Streams: sub-second off-chain delivery with optional on-chain verification; cross-chain actions still follow the bridge/messaging layer's finality policy. Good for trader UX while settlement stays anchored to finality. (docs.chain.link)
- Pyth: aggregates on Pythnet and ships Merkle roots via Wormhole; integrators pull the latest updates from Hermes and submit proofs on demand. Latency hinges on publisher cadence and consistency/finality, not on-chain proof verification, while aggregation trims calldata and gas per update. (docs.pyth.network)
- Hyperlane: finality-aware relaying with configurable block depth; production case studies report ~31-second median settlement on some routes, and signature/message aggregation can shorten inclusion tails under load. (docs.hyperlane.xyz)
Emerging best practices to actually reduce p95 latency
- Treat finality as a dial, not a constant. Where your risk policy allows, move from "finalized" to "block depth" on select origin chains to go from minutes to seconds--and document the risk trade-offs. CCIP's per-chain table is a useful reference. (docs.chain.link)
- Split the hot path from settlement:
- Hot path: fast attestations (guardian/multisig or AVS) with BLS aggregate verification at the destination for a sub-minute UX.
- Settlement path: periodic zk checkpoints or finalized commits for reconciliation and fraud resistance. (blog.alignedlayer.com)
- Use AVS verification when the destination is the bottleneck: move proof verification to a restaked AVS running natively on bare metal, then write one aggregated result to L1, avoiding block-by-block backlogs under load. (blog.alignedlayer.com)
- Right-size zk batches. If you must recurse, keep tree depth shallow and batch targets small enough to meet your SLO--e.g., 1-5 seconds of proving on your GPU cluster, not "thousands per batch" by default. (telos.net)
- Move bytes from calldata to blobs. Post-EIP-7623, put proofs, roots, and metadata in blobs where possible and treat calldata as the fallback; this lowers fees and avoids mempool delays. (eips.ethereum.org)
- Tune inclusion, not just proving. Pay up for first inclusion on the destination chain, add dynamic tips on retries, and switch to private relays when public mempools lag--this directly cuts p95 waits. (7blocklabs.com)
- Instrument for real. Time every stage (finality wait, proving, relay, inclusion), publish route-level p50/p95/p99, and enforce "batch age" limits so aggregation can't hide stalls. (7blocklabs.com)
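A minimal sketch of the last bullet--per-stage percentile tracking plus a batch-age guard. The stage names, sample values, and 20-second limit are illustrative:

```python
# Route-level stage metrics and a batch-age guard against hidden stalls.

def percentile(samples, p):
    """Nearest-rank percentile over a small in-memory sample."""
    xs = sorted(samples)
    idx = min(len(xs) - 1, round(p / 100 * (len(xs) - 1)))
    return xs[idx]

stage_samples_s = {
    "finality_wait": [900, 905, 910, 950, 1200],
    "prove":         [6, 7, 6, 8, 30],
    "relay":         [2, 2, 3, 2, 9],
    "inclusion":     [12, 12, 24, 12, 60],
}
for name, samples in stage_samples_s.items():
    print(name, "p50:", percentile(samples, 50), "p95:", percentile(samples, 95))

MAX_BATCH_AGE_S = 20  # flush even half-full batches past this age

def should_flush(oldest_item_age_s: float, batch_full: bool) -> bool:
    # Fullness OR age triggers a flush, so aggregation can't mask a stall.
    return batch_full or oldest_item_age_s >= MAX_BATCH_AGE_S

print(should_flush(25.0, batch_full=False))  # True: the age limit fires
```

In production you would feed these from real per-stage timestamps and a sliding window, but the age-or-fullness flush rule is the piece that stops aggregation from silently hiding a stalled route.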
Worked examples with numbers
- Scenario A: 5,000 small SNARK receipts must land on Ethereum during a market spike.
- Non-aggregated: ~250k gas each, ~1.25B gas total. Even spread across many blocks, tail messages wait minutes.
- Aggregated via AVS or recursion: one on-chain verify--~300k gas for a recursive SNARK, ~350k for an AVS batch with an aggregated BLS attestation. Every receipt becomes usable the moment that one transaction confirms, turning "many blocks" of waiting into one. (blog.alignedlayer.com)
- Scenario B: price updates from Solana to an EVM chain via Wormhole.
- Guardian signatures arrive after ~14 seconds; with sensible gas settings, EVM execution takes a few seconds more. Aggregating multiple VAAs into one redemption saves gas and prevents block overflow during spikes--it trims the tails but cannot touch the ~14-second baseline. (wormhole.com)
- Scenario C: verifying Ethereum headers on a target chain with a zk light client.
- Optimized pipelines report ~12-20 seconds "prove+verify" for Ethereum headers. Recursive aggregation lowers cost when verifying many headers at once, at the price of waiting longer before any single header is actionable--size the batch to your SLO. (blog.polyhedra.network)
A quick decision checklist
- Is your p95 dominated by source-chain finality?
- Yes → aggregation won't help; revisit your consistency policy and routing.
- Are you hitting on-chain verification or calldata limits at the destination?
- Yes → aggregate (BLS for signatures, recursion/AVS for proofs) to avoid multi-block queues.
- Do you need strict validity on every update?
- Yes → budget seconds-to-minutes of proving or a dedicated GPU cluster; if UX matters, consider a hybrid "fast attest + periodic zk" design.
- Are fees stranding you in the mempool post-EIP‑7623?
- Yes → move data to blobs, aggregate to trim calldata, and overpay for first inclusion.
Bottom line
- Aggregation is a major cost and throughput win. It cuts latency only when the bottleneck is on-chain verification capacity or per-update signature checks.
- For cross-chain oracle updates that wait on a conservative finality policy (e.g., Ethereum's "finalized" status), "rolling up proofs" won't turn minutes into seconds. The leverage is in tuning finality, using fast-attest paths, and engineering your relay/inclusion strategy--then aggregating to keep costs low and tails short. (docs.chain.link)
References and further reading
- Chainlink CCIP execution latency and per-chain finality: (docs.chain.link)
- Ethereum Pectra mainnet announcement, including EIP-2537/7691/7623: (blog.ethereum.org)
- EIP-2537 BLS12-381 precompiles and gas formulas: (eips.ethereum.org)
- EIP-1108 BN254 precompile repricing and baseline Groth16 costs: (eips.ethereum.org)
- Aligned Layer verification throughput and fast-path AVS: (blog.alignedlayer.com)
- Plonky2 recursion and aggregation benchmarks: (polygon.technology)
- Wormhole consistency levels and VAA verification flows: (wormhole.com)
- Hyperlane block-depth/latency configuration and production case study: (docs.hyperlane.xyz)
- Pyth cross-chain architecture and the Hermes pull model: (docs.pyth.network)
Meta description
Aggregating small proofs cuts cross-chain oracle latency only when on-chain verification throughput is the bottleneck; otherwise finality and relay times dominate. This article gives solid 2025-2026 numbers, shows where aggregation pays off, and offers a Pectra-aware playbook.