7Block Labs
Blockchain Technology

By AUJay

What’s the Most Gas-Efficient Way to Batch Thousands of Groth16 Proofs on Ethereum for a High-Throughput Rollup?

Short answer: stop verifying each Groth16 proof individually on L1. Use proof aggregation or recursion instead, so that one on-chain verification covers the whole batch. Design your contracts around "one constant-cost check per batch + compact membership commitments" rather than per-proof verification. With Ethereum's Pectra upgrade (live since May 2025) introducing BLS12‑381 precompiles, the best long-term approach is either Groth16 on BLS12‑381 or a Groth16→KZG/Halo2 wrapper, verified once on L1 for a few hundred thousand gas. (blog.ethereum.org)

TL;DR for decision‑makers

  • Verifying a single Groth16 proof costs about 207,700 gas plus roughly 7,160 gas per public input on BN254, so cramming thousands of individual verifications into an L1 gas budget isn't feasible. Aggregate or recurse instead. (medium.com)
  • The current best practice is to aggregate off-chain (SnarkPack for Groth16, or a Halo2/KZG-style universal aggregator such as UPA), then verify a single aggregated proof on L1, typically for about 300k-600k gas depending on method and batch size. Since May 7, 2025, Pectra's BLS12‑381 precompiles (EIP‑2537) have made BLS-curve verification affordable and safe on mainnet. (research.protocol.ai)

The constraint: why naive multi-verify fails

Verifying a Groth16 proof on BN254 (alt_bn128) boils down to the pairing precompile cost introduced in EIP‑197 and repriced in EIP‑1108. In practice that means roughly ~207,700 gas base plus ~7,160 gas per public signal. Even if you pack multiple proofs into a single transaction, total gas scales roughly linearly with the number of proofs: each proof brings its own multi-scalar multiplications and enlarges the final multi‑pairing.

To put that in numbers: at roughly 200k-250k gas each, 1,000 proofs would cost over 200M gas--far beyond the ~45M gas available per block on L1. (medium.com)
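As a sanity check on that arithmetic, here is a minimal sketch of the naive cost model. The constants are the post-EIP-1108 estimates cited in this article; treat all figures as engineering approximations:

```python
# Rough gas model for naive per-proof Groth16 verification on BN254
# (post-EIP-1108 figures cited above; engineering estimates, not exact costs).

GROTH16_BASE_GAS = 207_700      # fixed cost per proof
GROTH16_PER_INPUT_GAS = 7_160   # marginal cost per public input
BLOCK_GAS_LIMIT = 45_000_000    # approximate L1 block gas limit

def naive_batch_gas(num_proofs: int, public_inputs_per_proof: int) -> int:
    """Total gas if every proof is verified individually on L1."""
    per_proof = GROTH16_BASE_GAS + GROTH16_PER_INPUT_GAS * public_inputs_per_proof
    return num_proofs * per_proof

total = naive_batch_gas(1_000, 4)
print(f"1,000 proofs: {total:,} gas ({total / BLOCK_GAS_LIMIT:.1f} blocks)")
```

Even with a modest 4 public inputs per proof, 1,000 proofs need more than five full blocks of gas, which is why the rest of this article abandons per-proof verification entirely.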

Key Constants You Can Design Against (BN254 Precompiles):


  • ECADD (0x06): 150 gas
  • ECMUL (0x07): 6,000 gas
  • Pairing (0x08): 45,000 + 34,000·k gas (where k is the number of pairings)

These numbers reflect the EIP-1108 repricing and support the formula seen in on-chain benchmarks: ~207.7k gas fixed plus ~7.16k per public signal. For the full details, see (eips.ethereum.org).
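The headline figures can be roughly reconstructed from these precompile prices. The sketch below assumes a standard Groth16 verifier: one k=4 multi-pairing call, plus one ECMUL and one ECADD per public input; the gap between 6,150 and the observed ~7,160 per input is calldata and EVM overhead:

```python
# Reconstructing the headline Groth16 figures from the BN254 precompile prices.
ECADD_GAS = 150
ECMUL_GAS = 6_000

def pairing_gas(k: int) -> int:
    """EIP-1108 price of the 0x08 pairing precompile for k pairs."""
    return 45_000 + 34_000 * k

# A standard Groth16 verifier makes one k=4 multi-pairing call...
print(f"pairing (k=4): {pairing_gas(4):,} gas")               # 181,000
# ...plus one ECMUL and one ECADD per public input to fold the inputs
# into the verification-key term; calldata and EVM overhead lift the
# observed marginal cost from 6,150 to roughly 7,160 per input.
print(f"per input (EC ops only): {ECMUL_GAS + ECADD_GAS:,} gas")  # 6,150
```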

The 2025-2026 reality also includes EIP‑7623 (calldata repricing), which penalizes data-heavy transactions and pushes you to cut calldata bytes: sending thousands of raw proofs or large public-input vectors now costs considerably more than before. (eips.ethereum.org)


What changed in 2025: BLS12‑381 precompiles on mainnet

Ethereum's Pectra upgrade went live on mainnet at epoch 364032 on May 7, 2025. The standout feature here is EIP‑2537, which introduces seven BLS12‑381 precompiles (G1/G2 addition, MSM, field-to-curve mappings, and a multi‑pairing check). On-chain verification over BLS12‑381 is now fast, consistently priced, and first-class.

Meta‑EIP 7600 confirms EIP‑2537's inclusion, so you are no longer tied to BN254 by precompile availability: you can deploy verifiers and aggregators over BLS12‑381 at roughly ~128‑bit security.

For more details, check out the full info here: (blog.ethereum.org).

Indicative Pairing Costs:

  • BN254 pairing: 45,000 + 34,000·k gas (EIP‑1108)
  • BLS12‑381 pairing: precompiled at 0x0f with a fixed base plus a per-pair cost set in EIP‑2537; the net effect is competitive cost at higher security than BN254. To keep the verifier's arithmetic cheap, use the MSM precompiles (0x0c/0x0e). (eips.ethereum.org)

The playbook: four ways to batch thousands of Groth16 proofs

1) “Just batch-verify many Groth16s in Solidity” (don’t)

  • Merging per-proof pairing equations into one multi-pairing call saves all but one 45k base cost, but total k still grows at roughly 4·n (a Groth16 verifier uses 3-4 pairs per proof), so gas remains linear in n. This won't scale to thousands of proofs under L1 gas; treat it as a stopgap for small n. (eips.ethereum.org)
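A quick model makes the linearity concrete, assuming 4 pairs per proof:

```python
# Why merging n Groth16 checks into one multi-pairing call stays linear:
# you save (n - 1) copies of the 45k base cost, but the 34k-per-pair
# term still grows with roughly 4 pairs per proof.

def pairing_gas(k: int) -> int:
    """EIP-1108 price of the BN254 pairing precompile for k pairs."""
    return 45_000 + 34_000 * k

def separate_calls(n: int, pairs_per_proof: int = 4) -> int:
    """n independent pairing calls, one per proof."""
    return n * pairing_gas(pairs_per_proof)

def merged_call(n: int, pairs_per_proof: int = 4) -> int:
    """One multi-pairing call covering all n proofs."""
    return pairing_gas(n * pairs_per_proof)

n = 1_000
print(f"separate: {separate_calls(n):,} gas")   # 181,000,000
print(f"merged:   {merged_call(n):,} gas")      # 136,045,000 -- still linear in n
```

The merged call saves about 45M gas at n = 1,000, but both curves blow past the block limit, which is why true aggregation (options 2-4 below) is required.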

2) Groth16→Groth16 aggregation with SnarkPack (logarithmic verification)

  • SnarkPack combines many Groth16 proofs into one aggregated object with logarithmic proof size and verification time, and needs no extra trusted setup beyond two existing powers‑of‑tau transcripts. At Filecoin scale, it aggregates 8,192 proofs in about 8-9 seconds on a 32‑core CPU, with off-chain verification in tens of milliseconds. On-chain, gas is dominated by a small number of pairings plus a few MSMs. You can instantiate it on BN254 or, post-Pectra, on BLS12‑381. (research.protocol.ai)

Practical Gas-Budgeting Estimate:

  • Suppose your SnarkPack verifier needs O(log n) pairings--about 13 multi‑pairings for n = 8,192. The BN254 pairing component then costs roughly 45,000 + 34,000·13 = 487,000 gas, plus calldata and some elliptic-curve operations--still a small fraction of a block. BLS12‑381 costs are similar, with stronger security. This is an engineering estimate; measure your exact pair count when integrating. (EIP-1108)
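Under the O(log n) assumption, that pairing-gas estimate can be computed directly. Treat it as a rough planning number, not a measured cost:

```python
import math

# Engineering estimate of the BN254 pairing gas for a SnarkPack-style
# verifier, assuming O(log n) multi-pairings for n aggregated proofs.

def pairing_gas(k: int) -> int:
    """EIP-1108 price of the BN254 pairing precompile for k pairs."""
    return 45_000 + 34_000 * k

def snarkpack_pairing_estimate(n: int) -> int:
    """Pairing cost assuming ceil(log2(n)) pairs; measure your real verifier."""
    k = math.ceil(math.log2(n))
    return pairing_gas(k)

print(f"n=8,192: {snarkpack_pairing_estimate(8_192):,} gas")  # 487,000
```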

Where SnarkPack Shines:

  • No new trusted setup: aggregation reuses two existing powers‑of‑tau transcripts, so your current Groth16 ceremonies carry over.
  • Logarithmic scaling: aggregated proof size and verification time grow with O(log n), so batches of thousands remain cheap to check.
  • Fast aggregation: 8,192 proofs aggregate in roughly 8-9 seconds on a 32‑core CPU, with off-chain verification in tens of milliseconds.
  • Curve flexibility: it runs on BN254 today and on BLS12‑381 after Pectra.

  • You get to hold onto your existing Groth16 stack, skip any storage writes for each proof, and just commit to a batch root/public-input accumulator on L1.

3) Universal Proof Aggregation (Halo2/KZG) and SNARK wrappers

  • One option is to wrap multiple Groth16 proofs into a single Halo2‑KZG proof and verify that once on L1. Teams shipping this today report roughly ~350k gas per aggregated verification, plus ≈7k gas per proof only if you store per-proof status in contract storage or events (which a rollup typically doesn't need). Skip per-proof state and you stay at the ~350k baseline. (blog.nebra.one)
  • This approach leverages EIP‑2537 (BLS12‑381 arithmetic and pairings) and, where useful, EIP‑4844's KZG point-evaluation precompile (0x0a), keeping L1 verification in the low hundreds of thousands of gas even for large batches. (eips.ethereum.org)
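Assuming the reported ~350k-gas constant verification and no per-proof storage, the amortized cost falls off quickly with batch size:

```python
# Amortized L1 cost under a constant ~350k-gas aggregated verification
# (reported figure for Halo2/KZG-style aggregators; assumes no
# per-proof storage writes, as recommended in this article).

AGG_VERIFY_GAS = 350_000

def per_proof_gas(batch_size: int) -> float:
    """L1 gas attributable to each proof in a batch of the given size."""
    return AGG_VERIFY_GAS / batch_size

for n in (32, 256, 1_024, 8_192):
    print(f"n={n:>5}: ~{per_proof_gas(n):,.0f} gas/proof")
```

At batch sizes in the thousands, the per-proof L1 cost drops below the price of a single SSTORE, which is the whole point of the pattern.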

When to Prefer This:


  • Prefer this when you are consolidating heterogeneous proof systems into a single settlement check, want a transparent setup (Halo2), and can rely on a well-maintained "universal aggregator" operated by your team or a trusted vendor.

4) Off-chain verification with BLS attestation, optionally followed by recursion

  • Aligned's Proof Verification Layer checks thousands of proofs off-chain using a decentralized operator set and posts a single BLS aggregate signature back to Ethereum. A batch of one proof (any proof system) costs about ~350k gas; at a batch size of 20 that drops to around ~40k gas per proof, with verification latency in milliseconds. A separate Proof Aggregation Service can recursively compress the verified proofs into a single on-chain proof for about ~300k gas when you want hard L1 finality. This two-lane setup (fast AVS attestation + recursive L1 proof) is gaining adoption among throughput-sensitive apps. (blog.alignedlayer.com)

Which is “most gas‑efficient” for a rollup?

For a rollup that needs to finalize on L1:

  • If you control the proving stack and can tolerate minutes of aggregation latency, use SnarkPack aggregation (Groth16→Groth16) or a Halo2/KZG wrapper and verify one aggregated proof per batch on L1. Verification costs roughly ~300k-600k gas per batch regardless of how many constituent proofs it contains, at the price of off-chain aggregation time and hardware.
  • If you need sub-second L1-visible attestations, an AVS like Aligned gives fast BLS-backed results (around ~100k-300k gas per batch), and you can post a recursive proof later for finality. This minimizes end-to-end latency but adds a trust-minimized layer until the recursive proof lands.

Either way, the winning strategy keeps on-chain work constant per batch--not per proof.


Concrete budgeting: from 10,000 Groth16s to one L1 check

Assume we're dealing with 10,000 small Groth16 proofs (with just a handful of public inputs):

  • Baseline (no aggregation): 10,000 × ~220k gas ≈ 2.2 billion gas--entirely unfeasible on L1.
  • SnarkPack style (engineering estimate): with O(log n) pairings plus MSMs, assume k ≈ 14 on BN254: pairing gas is 45,000 + 34,000·14 = 521,000. Adding calldata and EC operations, the verify step should stay well under 1M gas--roughly a 2,000× cost reduction for the same correctness claim. Measure the pair counts in your actual verifier for a tighter figure.
  • Halo2/KZG wrapper (as reported): about 350,000 gas per aggregated verification, with negligible extra gas per proof if your batcher avoids per-proof storage. Two or three such proofs can settle 10,000 inputs, especially with an off-chain recursion tree. (blog.nebra.one)
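The three estimates can be lined up side by side. The 1M-gas all-in budget for the SnarkPack path is the assumption stated in the bullet above, not a measured number:

```python
# Side-by-side budget for 10,000 Groth16 proofs, using the estimates above.

NAIVE_PER_PROOF = 220_000    # midpoint of the 200k-250k per-proof range
SNARKPACK_VERIFY = 521_000   # 45,000 + 34,000 * 14 (pairing component only)
HALO2_KZG_VERIFY = 350_000   # reported aggregated-verification cost
TOTAL_BUDGET = 1_000_000     # assumed all-in SnarkPack verify tx (pairing +
                             # calldata + EC ops), per the bullet above

n = 10_000
print(f"naive:     {n * NAIVE_PER_PROOF:,} gas")
print(f"snarkpack: {SNARKPACK_VERIFY:,} gas pairing, <{TOTAL_BUDGET:,} all-in")
print(f"halo2/kzg: {HALO2_KZG_VERIFY:,} gas")
print(f"reduction: ~{n * NAIVE_PER_PROOF // TOTAL_BUDGET:,}x")
```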

On L2s the arithmetic is similar: public data shows about 775k L2 execution gas per aggregated verification at batch size 32, working out to roughly 46k gas per proof all-in including the query--the same amortization effect.


Engineering blueprint (what we implement for clients)

1) Choose your curve and aggregation path
  • Greenfield (new circuits): go with Groth16 on BLS12‑381, or a Halo2/KZG outer wrapper. BLS12‑381 offers ~128-bit security with native precompiles at 0x0b-0x11, making MSMs and pairings cheap enough to be first-class in Solidity. (eips.ethereum.org)
  • Existing BN254 circuits: either (a) stay on BN254 and aggregate with SnarkPack on BN254, or (b) wrap your BN254 Groth16 proofs in a BLS12‑381 Halo2/KZG aggregator and verify once on L1 via the BLS12 precompiles. Both avoid per-proof verification. (research.protocol.ai)

2) Minimize Public Inputs and Calldata

  • On BN254 Groth16, each public input adds about 7,160 gas, and EIP‑7623 makes calldata-heavy transactions pricier. Hash or Merkle-commit your public inputs off-chain and expose only a batch root on-chain; this keeps the on-chain payload slim and insulated from the EIP‑7623 repricing.
3) Commit to "which proofs are included" without per-proof bookkeeping
  • Include a Merkle root (or vector commitment) of each proof's public inputs/hints in the aggregated statement, then verify one proof on L1. For selective membership checks in other contracts, verify an inclusion proof against the batch root instead of reading per-proof storage. This avoids the ~7k storage/bookkeeping overhead that shared aggregators otherwise incur. (docs.nebra.one)
4) Keep the L1 verifier modular and upgradable
  • Put the verifier behind a timelocked proxy so you can switch BN254↔BLS12‑381 or SnarkPack↔Halo2 as the ecosystem evolves. Pectra's precompiles make such future upgrades smoother and more cost-effective. (eips.ethereum.org)
5) Provision aggregation hardware for your SLO
  • SnarkPack aggregates 8,192 proofs in about 8-9 seconds on a 32-core CPU. To scale to 100k+ proofs per batch, use a tree of local aggregations feeding a higher-level aggregate to maximize parallelism. Note that Halo2/KZG recursion latencies can stretch into minutes for large batches, so for a snappy user experience take a two-tier approach: fast BLS attestation up front, slower recursive proof behind it. (research.protocol.ai)
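The public-input commitment described in steps 2 and 3 can be sketched as follows. This is a minimal illustration: sha256 keeps it dependency-free, whereas a real deployment would typically use keccak256 so that Solidity can recompute the same root cheaply:

```python
import hashlib

# Minimal sketch: commit per-proof public inputs to a single 32-byte batch
# root. Assumption: sha256 stands in for the keccak256 an on-chain verifier
# would use; leaf encoding here is illustrative, not a production format.

def _hash(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Binary Merkle root; duplicates the last node on odd-sized levels."""
    assert leaves, "empty batch"
    level = [_hash(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        level = [_hash(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Commit 10,000 proofs' public inputs as one 32-byte root for L1.
inputs = [f"proof-{i}-public-inputs".encode() for i in range(10_000)]
root = merkle_root(inputs)
print(f"batch root: {root.hex()} ({len(root)} bytes for 10,000 proofs)")
```

Whatever the batch size, only the 32-byte root (and the aggregated proof) touches L1; everything else stays off-chain.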

Solidity-level notes that save gas

  • Use one multi-pairing call per verification: on BN254 (0x08) or BLS12-381 (0x0f), bundle all required pairs into a single call so you pay the precompile's base cost only once. (eips.ethereum.org)
  • On BLS12-381, prefer the MSM precompiles (0x0c/0x0e) over hand-rolled ECMUL loops; they carry batched discounts and avoid per-call overhead. (eips.ethereum.org)
  • If you must verify signatures on-chain (e.g. operator attestations), use "fast aggregate verify": roughly one k=2 pairing check plus a message mapping--about ~100k gas for the pairing on BLS12-381, excluding calldata--and aggregate pubkeys off-chain. (eips.ethereum.org)
  • Avoid per-proof SSTORE. If you need auditability, emit a single event with the batch root and reference it across contracts--the same ~350k-only pattern used by Halo2/KZG aggregators, where per-proof storage is optional. (docs.nebra.one)

“Which option should we pick?” A quick decision guide

  • Already on Groth16, want simple, solid L1 finality at the lowest gas:
    Use SnarkPack aggregation on BN254 or BLS12‑381 (post-Pectra) and verify each batch once on L1. Expect verification under 1M gas even for batches of thousands of proofs. (research.protocol.ai)
  • Want a drop-in service with predictable costs:
    Use a Halo2/KZG universal aggregator, and design your batch interface to skip per-proof storage so L1 verification stays around ~350k gas. (docs.nebra.one)
  • Need near-instant attestations with optional hard finality later: use an AVS like Aligned for off-chain verification and BLS aggregation (about ~350k gas per batch, around ~40k per proof at n=20), then post a recursive proof (~300k gas) at whatever cadence suits your risk tolerance. (blog.alignedlayer.com)

Example: turning 8,192 Groth16 proofs into one L1 check

  • Off‑chain: aggregate with SnarkPack in about 8-9 seconds on a 32-core CPU, and include a Merkle root of every proof's public inputs in the aggregated statement. (research.protocol.ai)
  • On‑chain: verify the aggregated proof with one multi-pairing call (BN254 0x08 or BLS12-381 0x0f). With O(log n) pairings--about 13 pairs on BN254--the pairing component is around 487k gas; with calldata and a few EC operations you typically land under 1M gas, a ~99.95% reduction versus verifying 8,192 proofs individually. Double-check your pair count and calldata size when integrating. (EIP-1108)
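Checking that reduction figure under a conservative 1M-gas all-in budget for the single aggregated verification:

```python
# Verifying the claimed ~99.95% reduction for the 8,192-proof example.
n = 8_192
naive = n * 207_700       # per-proof base cost, small public inputs ignored
aggregated = 1_000_000    # conservative all-in budget for one aggregated verify
reduction = 1 - aggregated / naive
print(f"naive: {naive:,} gas, aggregated: {aggregated:,} gas")
print(f"reduction: {reduction:.2%}")   # 99.94%
```

With the conservative 1M budget the reduction comes out at 99.94%, consistent with the ~99.95% headline; a tighter all-in figure only improves it.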

For the wrapper option: produce a single Halo2/KZG proof attesting that "all 8,192 Groth16s verified," then verify it once on L1 for about ~350k gas--with no per-proof storage needed if you are only settling the rollup state root. (docs.nebra.one)


2026‑ready best practices (brief but in‑depth)

  • Design for BLS12‑381 unless you are locked into BN254. Pectra provides fast BLS MSMs and pairings, stronger security, and better interop with blob/KZG tooling. If you are on BN254 today, keep a path to swap verifiers later. (eips.ethereum.org)
  • Keep public inputs small: hash to the field off-chain and commit roots on-chain. Each extra public input costs ~7,160 gas (BN254 Groth16), and EIP‑7623 made calldata-heavy transactions pricier. (hackmd.io)
  • Separate "attestation latency" from "L1 finality": for sub-second UX, lead with BLS-aggregated attestations, then post a recursive proof every N blocks to lock in finality cheaply. (blog.alignedlayer.com)
  • Skip per-proof accounting on L1: store a root; downstream contracts verify Merkle inclusion against it. This avoids the ~7k-per-proof bookkeeping that universal aggregators incur when tracking proofs individually. (docs.nebra.one)
  • Measure, don't guess: before mainnet, determine your verifier's exact multi-pairing count and calldata footprint, then use EIP‑197/EIP‑1108/EIP‑2537 pricing to compute precise upper bounds you can defend in a board meeting. (eips.ethereum.org)

Bottom line

  • If you want to maximize gas efficiency when batching thousands of Groth16 proofs for a high-throughput rollup, the best strategy is to skip the per-proof verification altogether and just settle one aggregated or recursive proof on L1.
  • Starting fresh? Target BLS12-381 (post-Pectra) with SnarkPack aggregation or a Halo2/KZG wrapper: a single verification of roughly ~300k-600k gas per batch. For low latency, combine fast BLS attestation with periodic recursive proofs. That combination hits both gas and throughput targets in 2026. (eips.ethereum.org)

References and further reading

  • Groth16 verification cost on BN254: ~207,700 gas fixed plus ~7,160 gas per public input, from precompile gas benchmarks. (medium.com)
  • SnarkPack: Groth16 aggregation of 8,192 proofs in ~8-9 seconds with a logarithmic verifier. (research.protocol.ai)
  • Pectra mainnet activation, May 7, 2025, including the EIP‑2537 BLS12‑381 precompiles. (blog.ethereum.org)
  • NEBRA UPA (Halo2/KZG aggregation): ~350k gas per aggregated verification, plus optional ~7k gas per proof for storage. (docs.nebra.one)
  • Aligned Verification Layer and Aggregation Service: ~350k gas per batch, ~40k gas per proof at batch size 20; recursive proofs ~300k gas. (blog.alignedlayer.com)


Like what you're reading? Let's build together.

Get a free 30-minute consultation with our engineering team.

7BlockLabs

Full-stack blockchain product studio: DeFi, dApps, audits, integrations.

7Block Labs is a trading name of JAYANTH TECHNOLOGIES LIMITED.

Registered in England and Wales (Company No. 16589283).

Registered Office address: Office 13536, 182-184 High Street North, East Ham, London, E6 2JA.

© 2026 7BlockLabs. All rights reserved.