ByAUJay
How Can I Aggregate Multiple zkBridge Attestations Into a Single Proof to Cut Gas Fees on Ethereum?
A Hands-On Guide to Batching zkBridge Attestations
This guide takes a practical approach to batching zkBridge attestations using recursive proofs, blob-first data pipelines, and BLS-backed attestations. The cool part? You can cut down your on-chain gas fees by 70-99% without sacrificing security.
What You'll Need
Before diving in, make sure you have:
- A solid understanding of zkBridge technology
- Familiarity with blockchain and gas fees
- Basic coding skills (you’ll be working with some code snippets)
Why Batching Matters
Batching helps you save a ton on gas fees, which is especially important when transactions add up. By using zkBridge attestations, you streamline the process, making your transactions more efficient and cost-effective.
The Approach
Here’s how you can go about it:
- Recursive Proofs: These allow you to create proofs that can verify other proofs, significantly reducing the amount of data that needs to be processed and stored on-chain.
- Blob-First Data Pipelines: By prioritizing blob data, you can optimize how information is handled, leading to faster transaction speeds and lower fees.
- BLS-Backed Attestations: Using BLS (Boneh-Lynn-Shacham) signatures enhances security while keeping your gas costs down. These signatures are compact and allow multiple attestations to be aggregated, which is a game changer.
The Implementation
Here’s a minimal, runnable Python sketch to get you started with batching attestations; the hash-based commitment is a placeholder for the real aggregation and verification logic:

```python
import hashlib

def batch_attestations(attestations: list[bytes]) -> bytes:
    # Placeholder: commit to the batch by hashing the concatenated attestations.
    # In production, this is where proof aggregation and verification would run.
    return hashlib.sha256(b"".join(attestations)).digest()
```

Replace the placeholder body with your actual batching and verification processes.
Final Thoughts
By using these strategies, you'll not only reduce your gas fees significantly but also maintain the high level of security that’s essential in the blockchain world. So go ahead, give it a shot, and watch your costs shrink!
In a nutshell, the goal is to bundle a bunch of attestations into a single, concise proof. We'll send the data using blobs, minimize the size of public inputs, and handle verification just once on Layer 1.
TL;DR
- If you’re looking to verify a single Groth16 proof on Ethereum, you’re looking at around ~220k gas. On the other hand, STARK verification is a bit heftier at over 1M gas. But here’s the cool part: if you aggregate N attestations into one proof, you can turn those N verifications into just one, slashing your gas costs by 70-95% or even more, depending on how you design it. (medium.com)
- Think about setting up your pipeline to prioritize blobs. Since Dencun (Mar 13, 2024) and Pectra (May 7, 2025), Ethereum offers inexpensive, short-term blob space along with a point-evaluation precompile (0x0A, 50k gas) and increased blob capacity (target/max of 6/9). With EIP-7623, data-heavy calldata posts are pricier, so keep your proofs in calldata while storing bulk data in blobs. (blocknative.com)
- Here are three solid aggregation patterns that have stood the test of time:
- Header batching with recursive proofs (think Polyhedra zkBridge style).
- Message-level batching under one or more headers (with membership proofs verified in-circuit).
- Off-chain verification networks combined with on-chain BLS aggregate attestations (which is now more affordable thanks to EIP-2537). (rdi.berkeley.edu)
Why aggregate zkBridge attestations at all?
Each hop on a zkBridge comes with two main costs when it comes to L1:
- Computation: This is all about verifying pairings and MSMs in the verifier contract.
- Data: This involves sending proof bytes and inputs, which used to be pretty heavy on calldata.
Measured Baselines:
- Groth16 verify on BN254: It usually runs around 207-220k gas, plus about 7k for each public input.
- STARK verifiers: These bad boys take over 1M gas.
- Point-evaluation for blob KZG: a flat 50k gas per call.
When you check each attestation on its own, the costs add up in a straightforward, linear way. But if you combine them, you can save time and money by doing one constant-cost check. Plus, you can shift a lot of that bulk data over to blobs.
Pattern A -- Batch block headers with recursive proofs (validity light clients)
This method gained traction thanks to Polyhedra’s zkBridge. The idea is to prove a bunch of source-chain headers off-chain, compress them recursively, and then just verify one concise Groth16 proof on the destination chain.
How it works (concretely):
- First off, we whip up a speedy “inner” proof for validating headers, and we can do this using deVirgo or GKR.
- Next, we wrap that proof up with a Groth16 “outer” proof. This keeps the on-chain verifier pretty much the same size.
- We also make sure the batch size, N, is adjustable: if you go for a bigger N, you'll see lower costs per header, but it does come with a bit more waiting time.
- Finally, once the proof checks out, store or emit the new light-client state in the updater contract. (rdi.berkeley.edu)
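To see how batch size trades off against per-header cost, here's a minimal sketch of the amortization math, using the ~230k-gas verifier figure from this article (the function name and defaults are illustrative, not from any real codebase):

```python
def amortized_gas_per_header(batch_size: int,
                             verify_gas: int = 230_000) -> float:
    """Per-header L1 gas when one recursive proof covers `batch_size` headers."""
    if batch_size < 1:
        raise ValueError("batch_size must be >= 1")
    return verify_gas / batch_size

# Larger N amortizes the near-constant verifier cost:
# N=1  -> 230k gas/header
# N=32 -> ~7.2k gas/header
# N=128 -> ~1.8k gas/header
```

The flat per-batch verifier cost is why "bigger N" keeps winning on cost, bounded only by your latency SLA and prover capacity.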
Numbers to budget for:
- With two-layer recursion, we’re looking at gas costs dropping to around 220-230k, a huge improvement compared to the tens of millions you'd rack up with traditional light clients.
- It's pretty standard for teams to snag N headers and verify them all at once; you can adjust N depending on your SLA and the gas market trends. (rdi.berkeley.edu)
Why It’s Attractive to Decision-Makers:
- Security: With validity proofs, there's no need for any outside trust.
- Cost: You get near-constant gas fees for each batch.
- Flexibility: You can tweak latency against cost by adjusting N and the power of the prover. deVirgo scales pretty much linearly across M machines. (blog.polyhedra.network)
Best Emerging Practice
- Keep public inputs minimal--ideally, stick to just the final light-client commitment(s) and a batched header commitment. Every extra public input raises gas; data committed inside the SNARK does not. (medium.com)
Pattern B -- Aggregate message attestations under Merkle commitments
When your business object is "many application messages" rather than "many headers," group the messages under roots, then prove the inclusions within a single circuit.
Design Sketch for Today’s Implementation:
- First up, for every source block (i), gather all the messages that are meant for chain (D).
- Next, create a Merkle tree from those messages and stash away the root (R_i).
- Now, let’s look at the circuit checks:
a) Make sure (R_i) is legit for block (i) (you can verify this through the chain’s header fields and the receipt/state proofs).
b) Confirm each message has a Merkle branch leading back to (R_i).
c) Optionally, add extra checks (like nonce monotonicity or some replay protection).
- Finally, combine the message trees from multiple blocks into a single proof. This proof has a public input consisting of a “root of roots” and just the essential metadata. With this setup, one on-chain verification can authorize (K) messages.
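The commitment structure above can be sketched in a few lines of Python. This uses SHA-256 from the standard library purely for illustration; a production circuit would use Keccak or a SNARK-friendly hash, and `merkle_root` / `root_of_roots` are hypothetical helper names:

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Root of a binary Merkle tree; an odd node is paired with itself."""
    if not leaves:
        return _h(b"")
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def root_of_roots(per_block_messages: dict[int, list[bytes]]) -> bytes:
    """Commit to R_i for each source block i, then to the ordered list of roots."""
    roots = [merkle_root(msgs) for _, msgs in sorted(per_block_messages.items())]
    return merkle_root(roots)
```

The "root of roots" is the single public input the on-chain verifier sees; every per-message Merkle branch is checked inside the circuit.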
Why it’s cheaper:
- Instead of shelling out 220k gas for each message, you only pay around 220-350k gas just once for a whole batch. (medium.com)
Implementation tips that really make a difference:
- Try to avoid having big Keccak footprints on‑circuit. If it's unavoidable, make sure to batch them smartly or take advantage of newer Keccak-batching techniques (like “keccacheck”) to help minimize constraints. If that’s not an option, go for layouts that use KZG / commitments tied to blob data. (iacr.org)
- Keep data and verification separate: store your large message payloads in blobs and just verify the commitments on-chain. When you need to authenticate blob contents against those commitments, make sure to use the 0x0A point‑evaluation precompile. (core.eips.fyi)
Pattern C -- Let a verification layer batch for you; verify one BLS aggregate attestation on L1
Instead of checking each ZK proof directly on Ethereum, you’ve got a few alternatives:
- You can verify proofs on a dedicated verification layer or through a committee.
- Get an aggregated BLS attestation for the results of the verification set.
- Finally, you can verify one aggregate signature on Ethereum using the native BLS12‑381 precompiles added in Pectra (EIP‑2537). (eips.ethereum.org)
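To make the aggregate-verify flow concrete, here's a toy sketch of the API shape. Important caveat: this uses insecure modular arithmetic as a stand-in for real BLS12-381 pairing operations, and all names are illustrative; it only demonstrates how one aggregate check replaces N individual checks:

```python
P = 2**255 - 19   # toy prime modulus (stand-in for the curve group order)
G = 9             # toy generator

def keygen(sk: int) -> tuple[int, int]:
    return sk % P, (sk * G) % P           # (secret key, public key)

def sign(sk: int, msg_scalar: int) -> int:
    return (sk * msg_scalar) % P          # toy "signature" on a hashed message

def aggregate(sigs: list[int]) -> int:
    return sum(sigs) % P                  # one group addition per signature

def fast_aggregate_verify(pubkeys: list[int], msg_scalar: int,
                          agg_sig: int) -> bool:
    # Toy stand-in for the two-pairing check e(agg_pk, H(m)) == e(G, agg_sig):
    # aggregate the public keys, then do a single verification.
    agg_pk = sum(pubkeys) % P
    return (agg_sig * G) % P == (agg_pk * msg_scalar) % P
```

On L1 the real version of `fast_aggregate_verify` is a single call into the EIP-2537 pairing precompile, regardless of how many signers contributed.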
What Changed in 2025:
- Pectra rolled out seven new BLS12‑381 precompiles (0x0b through 0x11), including multi‑pairing checks and MSMs. This makes aggregate signature verification affordable at ~128-bit security, often cheaper per pairing than the older BN254. (eips.ethereum.org)
Concrete economics (representative figures):
- Direct Groth16 on Ethereum: over 250k gas each.
- Using a verification layer: around 350k gas as a base to accept a batch, with about “hundreds” of proofs getting amortized to under 1k gas each on-chain since you only need to verify one attestation once. (docs.alignedlayer.com)
When to Choose This:
- You want to cut costs as much as possible and keep things simple on-chain. If you're okay with how the verification layer handles trust and liveness, this option might be right for you.
Make the pipeline blob‑first (post‑Pectra reality)
Two key protocol shifts play a crucial role in keeping costs in check:
- Blobs are a budget-friendly, short-term data option
- EIP‑4844 rolled out blobs (about 128 KiB each, and they're pruned after roughly 18 days) along with a dedicated fee market--making them usually 10 to 100 times cheaper than regular calldata. Plus, verification contracts can check blob contents using the 0x0A precompile, which costs around 50k gas. (blocknative.com)
- Pectra really shifted the field towards blobs
- EIP‑7691 bumped up the blob capacity (aiming for 6/9), and EIP‑7623 upped the floor costs for data-heavy calldata. The takeaway? Use blobs for your data and keep that calldata to a minimum. (eips.ethereum.org)
Practical knobs to turn:
- Fill your blobs: When you're paying for a whole blob, pack batches close to 128 KiB to minimize waste.
- Failover logic: Stick with blobs as your main path and only fall back to calldata during extreme blob congestion. With the 6/9 target/max schedule, the blob base fee falls faster below target than it rises above it, so congestion spikes tend to clear quickly.
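A greedy first-fit packer is one plausible way to keep blob fill high. This sketch ignores EIP-4844's field-element encoding overhead, and the function names are illustrative:

```python
BLOB_BYTES = 128 * 1024  # EIP-4844 blob size (field-element packing ignored)

def pack_into_blobs(payloads: list[bytes]) -> list[bytes]:
    """Greedy first-fit packing of payloads into fixed-size blobs."""
    blobs: list[bytes] = []
    current = b""
    for p in payloads:
        if len(p) > BLOB_BYTES:
            raise ValueError("payload larger than one blob; split upstream")
        if len(current) + len(p) > BLOB_BYTES:
            blobs.append(current)
            current = b""
        current += p
    if current:
        blobs.append(current)
    return blobs

def fill_percent(blobs: list[bytes]) -> float:
    """How much of the paid-for blob space actually carries data."""
    if not blobs:
        return 0.0
    used = sum(len(b) for b in blobs)
    return 100.0 * used / (len(blobs) * BLOB_BYTES)
```

Tracking `fill_percent` per batch is a cheap way to spot when your batching policy is paying for mostly-empty blobs.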
Worked example: batching 64 zkBridge attestations into one proof
Scenario
- Every 2 minutes, you get 64 attestations (which are basically message inclusions from various source headers).
- Your goal? Keep the end-to-end process on Ethereum mainnet to “under 1 minute.”
Design
- Let's kick things off with a recursion tree that wraps up 64 leaf checks into one sleek Groth16 wrapper.
- For the public inputs, we’ll need a single commitment that covers all the messages, the latest verified source header commitments, and some anti-replay metadata to keep things secure.
- As for the data path, we’ll publish the payloads for each message along with any auxiliary paths in blobs. Just remember to keep the final commitments in the calldata.
- When it comes to verification, a single call to your Groth16 verifier will do the trick. If you want to tie the batch to an on-chain referenced blob commitment, you can optionally add one or two KZG point-evaluation checks. (core.eips.fyi)
Budget
- On-chain verification: About 220-300k gas (using Groth16 with low public I/O).
- KZG point evaluations (if you're using them): 2 × 50k = 100k gas.
- Total per batch: Roughly 320-400k gas, instead of the hefty 64 × 220k ≈ 14.1M gas. That’s a >97% reduction in gas for the verifier step!
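The budget arithmetic above can be double-checked with a small helper (figures and defaults are taken from the numbers in this section; the function itself is illustrative):

```python
def batch_savings(n: int, per_verify_gas: int = 220_000,
                  batch_verify_gas: int = 300_000, kzg_checks: int = 2) -> float:
    """Percent gas saved by one aggregated verify vs n individual verifies."""
    naive = n * per_verify_gas                       # n separate Groth16 calls
    batched = batch_verify_gas + kzg_checks * 50_000  # one verify + KZG checks
    return 100.0 * (1 - batched / naive)

# 64 attestations: naive 14.08M gas vs ~400k batched -> ~97% saved
```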
Latency
- Proving time is the new bottleneck. With distributed provers like deVirgo, header proofs take around 10-20 seconds in the wild; message circuits vary with depth and hashing. To stay within your SLO, pair recursion with GPU or clustered proving. (blog.polyhedra.network)
Implementation checklist (what we deploy for clients)
1) Circuits and Recursion Strategy
- Leaf: Check each attestation membership against the source headers and make sure to normalize the formats right from the start.
- Mid-node: Bundle about 8 to 16 leaves for each recursion node. This seems to hit the sweet spot for Halo2/Plonk-style recursion.
- Wrapper: Use Groth16 on either BN254 or BLS12-381. Keep the public inputs limited to just a few 32-byte words. (medium.com)
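To size the recursion tree for a given fan-out, a quick helper can report the depth and internal proving-node count (illustrative, assuming a simple ceil-division tree):

```python
import math

def recursion_plan(leaves: int, fan_out: int) -> tuple[int, int]:
    """Return (tree depth, total internal proving nodes) for a fan-out."""
    if leaves < 1 or fan_out < 2:
        raise ValueError("need leaves >= 1 and fan_out >= 2")
    depth, nodes, level = 0, 0, leaves
    while level > 1:
        level = math.ceil(level / fan_out)  # one recursion layer
        nodes += level
        depth += 1
    return depth, nodes

# 64 leaves at fan-out 8: depth 2, 9 internal proofs (8 mid-nodes + 1 root)
```

Fewer internal nodes means less proving work per batch; more depth means more sequential latency, which is the trade-off behind the 8-16 sweet spot.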
2) Public IO Discipline
- Only share commitments like root(s), batch metadata hash, and destination address.
- Steer clear of revealing individual message fields as public inputs; instead, handle the verification inside the circuit. This helps keep the verifier's gas costs steady. (medium.com)
3) Verifier Contracts
- Make sure to cache your verification keys and keep upgrade paths separate.
- If you’re using an off-chain verification network, consider adding a BLS12‑381 fast-aggregate verification path (k=2 pairings is the usual choice), now natively supported thanks to EIP‑2537. (eips.ethereum.org)
4) Blob Integration
- Combine batch off-chain data and any extra paths into complete blobs, and keep those versioned hashes on-chain.
- When you need to verify the authenticity of blob content in contracts, use the point-evaluation precompile. (core.eips.fyi)
5) Batching Policy
- We're looking at a dynamic N that adjusts according to the blob base fee and demand. Thanks to EIP‑7691’s 6/9 knobs, prices take a nosedive when they’re below target; so, let your scheduler do its thing and auto‑increase N during those budget-friendly times. (eips.ethereum.org)
6) Observability and Fallback
- Metrics: Keep track of time distribution, queue age, blob fill percentage, and how much gas is used for on-chain verification.
- Fallback: Use a per-attestation verification path for those “trickle” traffic cases; if things get too busy, switch over to aggregation once you hit a certain threshold.
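One plausible shape for the dynamic batcher from the checklist: double N when blob space is cheap, halve it when it's pricey, and clamp to bounds. The thresholds and names here are assumptions for illustration, not a tested policy:

```python
def next_batch_size(current_n: int, blob_base_fee: int, target_fee: int,
                    min_n: int = 8, max_n: int = 256) -> int:
    """Grow batches when blob space is cheap, shrink them when it is pricey."""
    if blob_base_fee < target_fee:
        current_n *= 2        # cheap blobs: amortize more attestations per proof
    elif blob_base_fee > 2 * target_fee:
        current_n //= 2       # expensive blobs: ship smaller batches sooner
    return max(min_n, min(max_n, current_n))
```

In production you'd also factor in queue age so low traffic never stalls a batch indefinitely, which is exactly what the per-attestation fallback path above is for.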
Gas math you can actually plan around
- Groth16 verify baseline: You're looking at around ~207-220k gas fixed, plus about ~7k for each public input (BN254). It’s smart to keep those inputs small if you want to stay close to that lower end. (medium.com)
- STARK verifier on L1: This can often go over 1 million gas--so if you need some on-chain verification, it’s usually better to go for a “STARK → SNARK wrapper.” (docs.alignedlayer.com)
- BLS12‑381 aggregate signature verify: Using a two-pairing path, you're looking at about 100-130k gas with EIP‑2537; you can scale it up for distinct-message aggregates by passing in n+1 pairs in one call. (eips.ethereum.org)
- KZG point evaluation (0x0A): Each one will set you back about 50k gas. (core.eips.fyi)
Rule of Thumb for CFOs
- Running 64 separate L1 zk verifies? That’ll hit you with multi-million gas fees.
- But if you go with one aggregated verify plus a few KZGs, you're looking at just a few hundred thousand gas.
So, you can usually save around 90-99% on those verification costs!
Engineering pitfalls (and how to avoid them)
- Keccak on-circuit can really drain your resources. Try to batch it as much as you can or leverage the latest research on “Keccak in ZK” if you’re stuck with a certain format. If possible, rely on KZG or SNARK-friendly hashes for your commitments. (iacr.org)
- Keep an eye on your calldata size. Instead of bloating it, post data in blobs and just share the commitments in calldata. EIP-7623 has made heavy data in calldata more pricey structurally. (eips.ethereum.org)
- Adding public inputs can throw off your gas model. Make sure to audit your public IO before launching; adding just 10 extra inputs can bump up the gas by around 70k for every verification. (medium.com)
- When it comes to latency versus cost, remember that aggregation might cut costs but it doesn't necessarily mean you'll see a drop in latency. If you need response times of seconds, consider using folding/IVC to "stream-aggregate" instead of waiting for massive batches. (7blocklabs.com)
Where the ecosystem is heading (and why it helps you)
- Production-grade recursive stacks are now running in zk bridges: Polyhedra ships a two-layer recursion setup with stable verification costs and adjustable batching options. (rdi.berkeley.edu)
- BLS12-381 landed natively on Ethereum with Pectra, so aggregated attestations are now easy and affordable to verify on L1. (blog.ethereum.org)
- Blob capacity increased (EIP-7691) with asymmetric fee responsiveness: costs drop faster when demand is low, making "blob-first batching" not just cheaper but more predictable too. (eips.ethereum.org)
A step‑by‑step plan you can execute this quarter
- Choose your aggregation pattern (A, B, or C) based on your trust model and SLA.
- Revamp the prover pipeline to follow this flow: leaf → mid → wrapper.
- Give the public IO a makeover: you only need one or two 32-byte commitments.
- Shift data to blobs; and if you need on-chain authenticity proofs for blob content, just toss in some KZG checks.
- Roll out the EIP-2537 code paths for aggregated BLS attestations if you're working with an external verifier network.
- Create a dynamic batcher that adjusts based on blob base fees and queue depth.
- Don't forget to instrument everything! Set your SLOs for “time to proof,” “gas per verify,” and “blob fill %.” (core.eips.fyi)
Frequently asked executive questions
- How much are we actually saving?
Most of our clients see verification gas drop from millions to a few hundred thousand per batch--around 90-99% in savings--with less variance thanks to blob-first data availability. The exact amount depends on your public inputs and batch size.
- Will this slow us down?
Aggregation saves money, but latency hinges on proving time and your batch policy. To keep latency within your SLA, consider folding/IVC or smaller recursion fan-outs. (7blocklabs.com)
- Is it safe?
Validity-based patterns (A, B) rely only on ZK soundness and Ethereum itself. Pattern C adds a verifier-network or committee trust assumption but keeps the on-chain checks light by using native BLS. (eips.ethereum.org)
References and further reading
- Polyhedra zkBridge: deVirgo with recursive proofs, stable on-chain costs, and header batching. (rdi.berkeley.edu)
- Batch sizes and gas: Groth16 at ~220k gas plus ~7k per public input. (medium.com)
- Dencun/EIP‑4844: blobs, the 0x0A precompile (50k gas), and blob economics. (blocknative.com)
- Pectra mainnet and the EIP‑2537 BLS12‑381 precompiles. (blog.ethereum.org)
- Blob capacity increase and fee responsiveness via EIP‑7691; calldata repricing via EIP‑7623. (eips.ethereum.org)
If you’re looking to set up a zkBridge or even upgrade an existing one to handle aggregated attestations, 7Block Labs has got your back. We can provide a blob-first, recursion-powered pipeline that comes with clear and reliable gas and latency envelopes, all built on the design patterns mentioned earlier.