By AUJay
What Cost Savings Should I Expect When Migrating From Single-Proof Verification to High-Throughput Batching on Polygon?
TL;DR for decision‑makers
- Verifying a single Groth16 proof on an EVM chain costs roughly 200k-250k gas, plus about 7k gas per public input; STARK verification often exceeds 1 million gas. On Polygon PoS, that works out to roughly $0.05-$0.10 per Groth16 verification at current gas and POL prices. (medium.com)
- Batching typically brings per-proof gas down to roughly 15k-50k, a 70-95% saving. Recursive aggregation goes further, amortizing a near-constant ~350k-gas "O(1)" verification across an entire batch. For attestation-heavy flows, BLS signature aggregation holds verification flat at ~113k gas regardless of committee size. (blog.nebra.one)
Why this matters now on Polygon
- As of September 4, 2024, Polygon PoS uses POL as its gas token; the MATIC-to-POL migration is complete. All fees you measure and optimize are now denominated in POL.
- The network has been raising capacity and cutting confirmation times: PIPs have increased the block gas ceiling, and Heimdall v2 delivers faster finality. Bigger blocks fit larger batches, and faster finality shortens the wait for a batch to confirm.
- Check the Polygon gas tracker: base fees are under a cent for simple transfers, and blocks typically run under 50% utilized, so there is usually headroom for batches. (polygonscan.com)
Baseline: what a “single-proof” costs on EVM today
For Groth16 on BN254 (alt_bn128), post-Istanbul/EIP-1108 gas costs are straightforward:
- Pairing checks cost a base 45,000 gas plus 34,000 per pairing. A Groth16 verifier needs 3-4 pairings, i.e. roughly 147k-181k gas for pairings alone. With calldata and scaffolding you land around 200k-250k gas, plus ~7,160 gas per public input. (eips.ethereum.org)
- Calldata costs 4 gas per zero byte and 16 gas per non-zero byte (EIP-2028). Proof payloads are mostly non-zero bytes, so budget ~4,096 gas for the calldata of a 256-byte Groth16 proof, before public inputs. (eips.ethereum.org)
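The per-proof arithmetic above reduces to a small model. A sketch, using the EIP-1108 and EIP-2028 constants; the 40k "scaffolding" allowance and the simplification that every proof byte is non-zero are assumptions, not measured values:

```python
# Sketch of the single Groth16 verification gas model described above.
PAIRING_BASE = 45_000      # base cost of the pairing precompile (EIP-1108)
PAIRING_PER_PAIR = 34_000  # per pairing (EIP-1108)
NONZERO_BYTE = 16          # calldata gas per non-zero byte (EIP-2028)
PER_INPUT = 7_160          # observed per-public-input cost (ECMUL + overhead)

def single_groth16_gas(pairings: int = 4, proof_bytes: int = 256,
                       public_inputs: int = 0, scaffolding: int = 40_000) -> int:
    """Estimate gas for one on-chain Groth16 verification."""
    pairing_gas = PAIRING_BASE + PAIRING_PER_PAIR * pairings
    calldata_gas = NONZERO_BYTE * proof_bytes  # assume all bytes non-zero
    return pairing_gas + calldata_gas + PER_INPUT * public_inputs + scaffolding

print(single_groth16_gas(pairings=4, public_inputs=2))  # 239416
```

The output lands squarely in the 200k-250k range quoted above; adjust the scaffolding constant to match your own verifier contract.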
Reality Check on Polygon PoS at Today's Prices (January 2026)
- Polygonscan shows a base gas price around 2,300-2,400 gwei with POL near $0.128. A 220k-gas verification therefore costs roughly 0.517 POL, or about $0.066; scale to the gas prices you actually observe. (polygonscan.com)
Throughput Limits with One-by-One Verification
At roughly 220k-250k gas per verification and a per-block gas budget of ~45M (a reasonable average for the chain), you fit about 180-200 proofs per block. Even on a fast chain, that caps high-volume applications. See Aligned Layer's documentation for details.
Batching primitives you can adopt (and when)
Batching isn't one-size-fits-all; pick the approach that matches your constraints.
1) On‑chain Groth16 batch verification (random linear combination)
- Instead of 3-4 pairings per proof, combine n proofs with random scalars and check a single equation: pairings drop from ~3n to n+2. MSM/ECMUL work rises, but it is cheap relative to pairings. (fractalyze.gitbook.io)
- A rough gas model:
- Pairings: 45k + 34k·(n+2) gas.
- ECADD/ECMUL: 150 / 6,000 gas each where used (EIP-1108).
- Calldata: scales linearly with the number of proofs and public inputs. (eips.ethereum.org)
Rule of thumb: at n=100, pairings cost about 3.51M gas (~35.1k per proof); add a few thousand more for MSM and calldata and you land around 40k-50k gas per proof. That is a drop of over 80% from the ~220k single-proof cost, or roughly $0.01 per proof on Polygon PoS at current gas prices. (eips.ethereum.org)
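The n+2 pairing count turns into a simple amortization curve. A sketch of the model, where the 8k per-proof allowance for extra ECMULs and calldata is an assumption:

```python
# Amortization of random-linear-combination batch verification.
PAIRING_BASE = 45_000
PAIRING_PER_PAIR = 34_000

def batch_rlc_gas_per_proof(n: int, per_proof_extra: int = 8_000) -> float:
    """Gas per proof when pairings collapse from ~3n to n+2.

    per_proof_extra is an assumed allowance for the additional ECMULs
    and calldata that each proof contributes to the batch.
    """
    pairing_gas = PAIRING_BASE + PAIRING_PER_PAIR * (n + 2)
    return pairing_gas / n + per_proof_extra

for n in (10, 50, 100):
    print(n, round(batch_rlc_gas_per_proof(n)))
```

At n=100 this lands near 43k gas per proof, consistent with the 40k-50k range above; the curve flattens quickly, so batches beyond ~100 mostly buy calldata risk, not savings.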
When to use: choose on-chain batching when you want zero added trust assumptions and can accept larger proof payloads, plus the engineering work to make batching sound.
Caveats and Guardrails
- Bind randomizers to the batch context for soundness: include domain separators and nonces so proofs cannot be mixed and matched across batches.
- Watch calldata costs at large n, and track calldata repricing: EIP-7623 and draft EIP-7976 are tightening the floor for data-heavy transactions over time.
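The randomizer-binding guardrail can be sketched concretely. This uses `hashlib.sha3_256` as a stand-in for the EVM's keccak256 (they differ in padding, so an on-chain implementation must use keccak256), and the domain tag and field layout are illustrative, not a fixed standard:

```python
import hashlib

# Sketch: domain-separated randomizer derivation for batch verification.
DOMAIN = b"MYAPP_BATCH_VERIFY_V1"  # hypothetical domain separator
BN254_R = 21888242871839275222246405745257275088548364400416034343698204186575808495617

def derive_randomizers(batch_id: int, block_hash: bytes,
                       proof_commitment: bytes, n: int) -> list:
    """One scalar per proof, bound to the batch context and the proof list."""
    scalars = []
    for i in range(n):
        h = hashlib.sha3_256(
            DOMAIN
            + batch_id.to_bytes(32, "big")
            + block_hash
            + proof_commitment            # commitment to all proofs in the batch
            + i.to_bytes(4, "big")        # per-proof index prevents scalar reuse
        ).digest()
        scalars.append(int.from_bytes(h, "big") % BN254_R)
    return scalars
```

Because the commitment covers the full proof list, a prover cannot swap a proof in after seeing the challenges, which is the mix-and-match malleability the caveat above warns about.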
2) Off‑chain recursive aggregation → O(1) on‑chain verification
- Aggregate many proofs into one recursive proof that is verified on-chain only once, e.g. with a Halo2-KZG stack. Most deployments spend about 350k gas per aggregated verification, roughly independent of batch size. (blog.nebra.one)
- In NEBRA's UPA, for example, each proof costs ~20k gas to submit on-chain, plus ~100k of per-batch overhead and a final ~350k-gas verification. At a batch size of 32 that totals ~1.09M gas, about 34k per proof; at 100 it totals ~2.45M gas, about 24.5k per proof. (blog.nebra.one)
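Treating the component costs quoted above as assumptions, per-proof gas falls off as the fixed costs amortize:

```python
# Amortization model for off-chain recursive aggregation.
SUBMIT_PER_PROOF = 20_000   # assumed per-proof submission cost
BATCH_OVERHEAD = 100_000    # assumed per-batch overhead
FINAL_VERIFY = 350_000      # single aggregated on-chain verification

def aggregated_gas_per_proof(batch_size: int) -> float:
    """Per-proof gas: irreducible submission cost plus amortized fixed costs."""
    fixed = BATCH_OVERHEAD + FINAL_VERIFY
    return SUBMIT_PER_PROOF + fixed / batch_size

for n in (8, 32, 100):
    print(n, round(aggregated_gas_per_proof(n)))
```

The curve asymptotes at the 20k submission cost, so very large batches mainly trade latency for diminishing savings.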
When to Use It
Use it when you want the deepest on-chain savings and can tolerate seconds of off-chain proving latency. It fits especially well for:
- Oracle/state attestations
- Batched mints/claims
- Periodic settlement
Caveats
- Proving latency grows with n: recursive aggregation time increases with batch size, so model it under your expected load.
- Parallelize workers: split proving and aggregation across workers to keep latency bounded as volume grows.
- Bound batch windows: cap how long a batch may wait so you still meet your Service Level Objectives (SLOs).
3) Signature aggregation for attestations (BLS)
- If your "proofs" are really attestations (oracles, bridge votes, validator signatures), aggregate the BLS signatures and verify one signature on-chain. Two pairings (~113k gas) check an arbitrarily large committee signature, often the cheapest path for feeds and bridges. (eips.ethereum.org)
When to Use
Use it for data-availability or attestation-heavy flows where ZK is handled off-chain or isn't always needed.
Worked examples with current Polygon numbers
Assumptions for All Examples Below
- Gas Price: 2,350 gwei
- POL: $0.128
- Block Gas Budget: ~45M
Plug in your own operational figures; the percentage savings are structural and hold across price changes. (polygonscan.com)
Example A -- 500 user proofs per minute (Groth16, 2 public inputs)
- Single-proof: 500 × (220k + 2×7.16k) ≈ 117M gas per minute, about 2.6 blocks' worth, at roughly $0.066-$0.072 per proof.
- On-chain batch verification, 100 proofs per batch, five batches per minute: ~3.5M gas for pairings plus ~0.3M for MSM/calldata, about 3.8M gas per batch and ~19M gas per minute. That is roughly $0.011 per proof, around 83% cheaper.
- Recursive aggregation, five batches of 100: about 5 × 2.45M ≈ 12.25M gas per minute, including a ~350k verification per batch. That is ~24.5k gas and roughly $0.007 per proof, nearly 90% cheaper, and it frees up blockspace.
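The three scenarios reduce to one gas-to-dollars conversion. The gas price and POL price are the snapshot assumptions stated above:

```python
# Convert per-proof gas into USD under the snapshot assumptions.
GAS_PRICE_GWEI = 2_350
POL_USD = 0.128

def usd_per_proof(gas_per_proof: float) -> float:
    pol = gas_per_proof * GAS_PRICE_GWEI * 1e-9  # gwei -> POL
    return pol * POL_USD

scenarios = {
    "single proof (2 inputs)": 220_000 + 2 * 7_160,
    "on-chain batch (n=100)":  3_800_000 / 100,
    "recursive aggregation":   24_500,
}
for name, gas in scenarios.items():
    print(f"{name}: {gas:,.0f} gas -> ${usd_per_proof(gas):.4f}")
```

Re-run this with live gas and POL prices before committing to a budget; the absolute dollars move, but the ratios between scenarios do not.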
Operational impact:
- Blocks consumed fall from roughly 3 per minute to under 1 at the same throughput. Queueing shifts from "wait 2-3 blocks" to "fits in one block with headroom," smoothing the user experience during spikes.
Example B -- 50 oracle feeds each second, committee-signed (BLS)
- With straightforward per-feed ECDSA, 50 signature verifications plus storage writes can consume hundreds of thousands of gas per second.
- With BLS aggregation, ~113k gas verifies the entire set each second. Even with per-feed bookkeeping, steady state usually stays under 250k gas per second, a large cut in verification overhead. (eips.ethereum.org)
Example C -- Cross‑rollup settlement with a zkEVM proof
- Settling or verifying state to Ethereum L1 got much cheaper with post-Dencun EIP-4844 blob pricing, and it keeps improving as blob targets rise. The L1 leg gets cheaper, and batching on Polygon for the L2-side verification compounds the savings. (eip4844.com)
How EIP‑4844 and evolving calldata pricing interact with batching
- Dencun (March 13, 2024) introduced blobs for rollup data availability on L1, sharply cutting L2 posting costs, with future upgrades raising blob capacity further. If your Polygon setup settles to Ethereum, you inherit this benefit. (eip4844.com)
- Meanwhile, EVM calldata pricing is in flux (live EIP-7623 discussion, draft EIP-7976), with the goal of discouraging very data-heavy transactions. Batching that reduces calldata per proof, or eliminates it via recursion, is the future-proof choice. (eips-wg.github.io)
Best emerging practices we see working in production
Contract-level
- Hardcode the verification key (VK) into immutable storage or bytecode to avoid SLOADs and shrink calldata. Pack curve points tightly and avoid redundant limbs; pairing precompile calls dominate, so keep everything around them minimal. (eips.ethereum.org)
- For batch Groth16, derive the random-linear-combination challenges with domain separation, e.g. keccak over the batch ID, a block hash, and a commitment to all proofs. This blocks mix-and-match malleability. (fractalyze.gitbook.io)
- For per-proof receipts, emit event logs rather than persistent SSTOREs unless you need on-chain state for gating: a first-write SSTORE still costs 20k gas, while events are far cheaper and indexable off-chain. (ethereum.org)
- If your flow is signature-heavy, move to BLS aggregation on BN254 (two pairings), and watch the BLS12-381 precompile effort (EIP-2537) for a future bump in security margin. (eips.ethereum.org)
Batch Scheduler and Ops
- Size batches from per_proof_gas ≈ per_proof_overhead + batch_fixed_gas / batch_size; solve for your target per_proof_gas subject to block gas and latency SLOs. Cap each batch at a fraction of current block gas (~40% or less) to avoid reverts during spikes. (blog.nebra.one)
- Parallelize proving and aggregation workers, and flush on timeout so late proofs don't stall a batch. For multi-tenant aggregators, shard queues by customer to keep tail latency bounded. (blog.nebra.one)
- Track batch fill rate, prove time (p50/p95), per-proof gas, revert rate, calldata bytes per proof, and blocks-to-finality. Polygon's recent upgrades target ~2-5s deterministic finality; confirm against the finalized tag in your RPC before releasing funds. (docs.polygon.technology)
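The sizing formula in the first bullet inverts directly: given a target per-proof gas and a block-gas cap, solve for the feasible batch-size range. The component costs in the example call are assumptions:

```python
import math

def min_batch_size(target_per_proof: int, per_proof_overhead: int,
                   batch_fixed_gas: int) -> int:
    """Smallest batch size that reaches target_per_proof under the amortization model."""
    if target_per_proof <= per_proof_overhead:
        raise ValueError("target below irreducible per-proof overhead")
    return math.ceil(batch_fixed_gas / (target_per_proof - per_proof_overhead))

def max_batch_size(block_gas: int, cap_fraction: float,
                   per_proof_overhead: int, batch_fixed_gas: int) -> int:
    """Largest batch that fits under cap_fraction of the block gas budget."""
    budget = block_gas * cap_fraction - batch_fixed_gas
    return max(int(budget // per_proof_overhead), 0)

# Target 25k gas/proof with an assumed 20k overhead and 450k fixed cost:
print(min_batch_size(25_000, 20_000, 450_000))           # 90
print(max_batch_size(45_000_000, 0.4, 20_000, 450_000))  # 877
```

Any batch size between the two bounds meets both the cost target and the 40% block-gas guardrail; pick the low end when latency SLOs are tight.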
Security and Correctness
- Bind proofs to the application context (chain ID, contract address, batch ID), either in the statement or via a transcript, so they cannot be replayed across batches or chains.
- If you evaluate non-EVM verification paths (off-chain verification networks, AVSs), document the trust/latency trade-off explicitly. Some stacks verify in milliseconds using BLS-aggregated attestations, with ~350k gas per batch on L1 and ~40k gas per proof at a batch size of 20; reserve them for flows where speed beats cryptographic minimalism.
Cost tables you can budget against (Polygon PoS, Jan 2026 snapshot)
- Single Groth16 verification, 2 public inputs: ~220k-234k gas ≈ 0.52-0.55 POL ≈ $0.066-$0.070.
- Batch Groth16 (n=100, on-chain combining): ~3.8M gas per batch, ~38k gas per proof ≈ 0.089 POL ≈ $0.011 per proof; roughly 83% savings.
- Recursive aggregation (Halo2-KZG style): ~2.45M gas for a 100-proof batch, ~24.5k gas per proof ≈ 0.058 POL ≈ $0.007 per proof; roughly 89% savings.
- BLS aggregation for committee attestations, any N: a flat ~113k gas for signature verification (plus a little bookkeeping). Even with dozens of feeds you typically stay under 250k gas total, and savings grow with N.
Dollar totals move with gas and POL prices, but the percentage savings are structural and will hold.
Migration blueprint we recommend
1) Inventory your proofs
- What share is zk-SNARKs versus attestations? What are the per-proof payload sizes and public input counts? Flag the "fast path" flows that need immediate processing and cannot wait for a batch window.
2) Choose a batching lane for each flow
- Hefty zk-proofs that tolerate a few seconds of lag: recursive aggregation.
- High-frequency attestations or feeds: BLS aggregation.
- Cryptographic minimalism entirely in EVM: random-linear-combination batching for Groth16.
3) Right-size batch windows
- Start with 250-500ms windows for feeds (BLS) and 2-6s windows for recursive aggregation. Hard-cap each batch submission at 35-40% of current block gas, and tighten windows dynamically under congestion.
4) Cut calldata and storage
- Pack proofs and inputs, avoid duplicate limbs, prefer events over SSTORE receipts where possible, and keep verification keys immutable in bytecode to reduce SLOADs.
5) Prove and verify safely
- Challenge derivation: domain-separate batches and commit to the proof list before deriving randomizers.
- Reorg/finality: for writing flows, act only on finalized blocks (the "finalized" tag) on Polygon PoS to avoid rare finality glitches. See the Polygon documentation.
6) Measure and iterate
- Watch per-proof gas, p95 latency, and revert causes (out-of-gas from gas spikes, underestimated calldata). Tune batch windows to meet your SLOs.
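The batch-window and measurement advice above comes together in the batch scheduler. A minimal sketch of the flush logic, with every threshold (batch size, gas cap, window length, per-proof gas estimate) an assumption to tune against your own metrics:

```python
import time

class BatchScheduler:
    """Flushes a batch when it is full, gas-capped, or its window expires."""

    def __init__(self, max_proofs=100, gas_cap=18_000_000,
                 per_proof_gas=24_500, window_s=4.0):
        self.max_proofs = max_proofs
        self.gas_cap = gas_cap            # e.g. ~40% of a 45M-gas block
        self.per_proof_gas = per_proof_gas
        self.window_s = window_s
        self.pending = []
        self.opened_at = None

    def add(self, proof):
        """Queue a proof; returns a batch to submit if a flush condition fires."""
        if not self.pending:
            self.opened_at = time.monotonic()
        self.pending.append(proof)
        return self._maybe_flush()

    def _maybe_flush(self):
        full = len(self.pending) >= self.max_proofs
        over_gas = len(self.pending) * self.per_proof_gas >= self.gas_cap
        timed_out = (self.opened_at is not None and
                     time.monotonic() - self.opened_at >= self.window_s)
        if full or over_gas or timed_out:
            batch, self.pending = self.pending, []
            self.opened_at = None
            return batch  # submit on-chain, then track finality
        return None
```

In production you would also call `_maybe_flush` from a timer so the timeout fires even when no new proof arrives, and confirm the submission against the finalized tag before acknowledging it.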
Frequently asked objections
- "Polygon is already affordable, so why bother with batching?"
It's about headroom and user experience at scale. Single-proof flows run into pairing limits and the block gas ceiling; batching multiplies throughput and keeps fees steady through short-term gas spikes. (docs.alignedlayer.com)
- "Will batching make our system less secure?"
Not if done correctly. Batch Groth16 rests on well-studied randomization arguments, and recursive aggregation is battle-tested in rollups. If you adopt an external verification layer, document its security model (e.g. a restaked BLS quorum) and reserve it for appropriate flows. (fractalyze.gitbook.io)
- "What about future EIP changes?"
The trend is cheaper L1 data availability via blobs and stricter calldata floors. Batching that minimizes calldata per result, or moves data into blobs and recursion, ages well. (eip4844.com)
Bottom line
- On Polygon PoS today, moving from single-proof verification to high-throughput batching saves roughly 70-95% of gas per proof and lifts verification throughput 5-10x, with no loss of cryptographic security if you pick the right mechanism.
- If your system settles to Ethereum L1 (zkEVM/appchains), the EIP-4844 blob market compounds the savings on the settlement side: cheaper L1 and cheaper L2 when you batch.
Want precise numbers for your workload? Send us your trace data and we'll model it, returning a tailored batch plan with concrete gas and latency SLOs, ready-to-deploy contracts, and a proving and aggregation pipeline.
Like what you're reading? Let's build together.
Get a free 30-minute consultation with our engineering team.