7Block Labs
Blockchain Development

By AUJay

Over the next year or two, a lot will depend on how well rollups can scale their proof throughput. It's not just about generating proofs quickly; it's also about how cheaply you can post data, verify results, and ride out fee spikes.
This guide collects practical, field-tested advice for building provers, selecting aggregation strategies, and designing sustainable fee policies, covering everything from Ethereum L1 (blobs included) to AVS verification layers and alternative data availability options.

Best practices for future-proofing rollup proof throughput: Designing for provers, aggregation, and fees

Whether you're a startup or an established team, the key question is the same: how do we avoid getting tripped up by proving bottlenecks or fee spikes as we scale? By 2025 the answers are reasonably clear, but you have to design for them from the start.

Treat proving as a multi-tenant service with explicit service level objectives (SLOs). Lean on recursion and aggregation to keep on-chain verification cheap. Understand Ethereum's post-Pectra blobspace economics, and prepare for PeerDAS.

  • Stay flexible: preserve the option to settle directly to L1, route through AVSs, or switch DA backends, all without downtime.

The sections below break these down, with real numbers you can use for budgeting.


1) Design your proving layer like a product, not a script

Your prover is an always-on service with performance targets. Treat it accordingly.

  • Define SLOs: pin down the maximum acceptable proof latency per batch, the minimum proofs per minute, and the maximum queue depth per circuit class. Export Prometheus metrics such as job_wait_seconds, prove_latency_p95, and queue_len_by_circuit.
  • Separate "hot" and "cold" circuits. Hot circuits are your block-level and state-transition circuits, plus any recursive wrappers; they must meet block-time SLOs every time.
  • Cold circuits cover heavy analytics, fraud or finality challenges, and occasional coprocessor jobs. They can run on cheaper hardware at lower priority.
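To make the hot/cold split concrete, here is a minimal Python sketch of per-circuit-class queues tracking the SLO metrics named above. The class names, depth limits, and in-process percentile tracking are illustrative assumptions; in production you would export these through a Prometheus client rather than compute them locally.

```python
import time
from collections import deque
from dataclasses import dataclass, field

@dataclass
class CircuitClass:
    """One prover queue per circuit class, with SLO bookkeeping."""
    name: str
    hot: bool                       # hot classes must meet block-time SLOs
    max_queue_depth: int            # queue_len_by_circuit SLO
    queue: deque = field(default_factory=deque)
    latencies: list = field(default_factory=list)

    def enqueue(self, job):
        # Reject rather than let head-of-line blocking build up.
        if len(self.queue) >= self.max_queue_depth:
            raise RuntimeError(f"queue depth SLO breached for {self.name}")
        # Store arrival time so job_wait_seconds can be computed on dequeue.
        self.queue.append((job, time.monotonic()))

    def record_latency(self, seconds: float):
        self.latencies.append(seconds)

    def prove_latency_p95(self) -> float:
        xs = sorted(self.latencies)
        return xs[int(0.95 * (len(xs) - 1))] if xs else 0.0

# Hypothetical split: block circuits are hot, analytics are cold.
hot = CircuitClass("state_transition", hot=True, max_queue_depth=32)
cold = CircuitClass("analytics", hot=False, max_queue_depth=1024)
```

The point of the sketch is the separation: each class gets its own queue, its own depth limit, and its own latency percentile, so a flood of cold jobs can never starve the hot path.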

Hardware mix and acceleration

On modern stacks, GPUs are the workhorse for 2025, and the libraries keep improving. Plonky3 and Stwo can handle 500k to 2 million Poseidon hashes per second on laptops, and server GPUs go much further, which translates directly into faster proving for hash-heavy circuits (polygon.technology). Ingonyama's ICICLE brings GPU-accelerated MSM/NTT with real backing and production users, making it a low-friction way to adopt GPU acceleration without overhauling your stack (ingonyama.com).

Think about capacity in terms of "proofs per dollar." Succinct's SP1 benchmarks suggest its GPU prover can cut cloud costs roughly 10x versus other zkVMs for standard light-client or EVM workloads, which works out to well under a cent per transaction on average Ethereum blocks. Treat this as a starting point for budgeting and benchmark your own circuits (succinct.xyz).

Structure the prover pipeline into distinct stages:

  • Stage 1: witness generation, which is CPU-bound.
  • Stage 2: FFT/MSM/commit, where the GPU does the heavy lifting.
  • Stage 3: recursion and wrapping, often including the STARK-to-SNARK transition. Keeping these stages separate lets you tune and scale each one independently instead of being limited by a single bottleneck.
  • Coalesce small jobs. Many zkVMs carry a fixed per-proof overhead; per SP1's docs, programs under roughly 2 million PGU can be dominated by those fixed costs, so batch them together or prove them through a recursive accumulator (docs.succinct.xyz).
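As a sketch of the coalescing idea, here is a simple greedy batcher. The job format of (id, PGU) and the per-batch budget are invented for illustration; the 2M PGU threshold is the rough figure cited above.

```python
SMALL_JOB_PGU = 2_000_000       # below this, fixed costs dominate (per SP1)
BATCH_TARGET_PGU = 8_000_000    # hypothetical recursion-batch budget

def coalesce(jobs):
    """jobs: list of (job_id, pgu). Returns batches as lists of job ids.

    Large jobs prove alone; small jobs are packed greedily until the
    batch budget is hit, so one proof amortizes the fixed overhead.
    """
    batches, current, used = [], [], 0
    for job_id, pgu in jobs:
        if pgu >= SMALL_JOB_PGU:
            batches.append([job_id])          # large enough to stand alone
            continue
        if used + pgu > BATCH_TARGET_PGU and current:
            batches.append(current)           # flush the full batch
            current, used = [], 0
        current.append(job_id)
        used += pgu
    if current:
        batches.append(current)
    return batches
```

A more careful scheduler would also bound batch latency so a lone small job doesn't wait forever for companions, but the packing logic stays the same.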

Operational controls

  • Admission control: cap PGU/cycles and concurrency per tenant to avoid head-of-line blocking. RISC Zero Bonsai enforces limits on concurrent proofs and cycles per proof; if you self-host, replicate those controls (dev.risczero.com).
  • Pre-emptive scheduling: reserve dedicated GPU time for the recursive wrapper. This keeps end-to-end latency steady when load spikes.
  • Canary recursion: every N blocks, produce an extra recursive proof on a second stack (for example Groth16 versus Plonk-KZG) so you catch soundness bugs in your main verification path early. Recent academic work has found soundness bugs across multiple zkVMs, which makes this kind of diversity cheap insurance (arxiv.org).
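A minimal per-tenant admission-control sketch in the spirit of those limits; the caps and the API shape are invented for illustration, not taken from Bonsai.

```python
class AdmissionController:
    """Caps concurrent proofs per tenant and cycles per proof."""

    def __init__(self, max_concurrent=4, max_cycles=10_000_000_000):
        self.max_concurrent = max_concurrent
        self.max_cycles = max_cycles
        self.running = {}            # tenant -> active proof count

    def admit(self, tenant: str, cycles: int) -> bool:
        if cycles > self.max_cycles:
            return False             # single proof too large for the SLO
        if self.running.get(tenant, 0) >= self.max_concurrent:
            return False             # tenant at quota: no head-of-line blocking
        self.running[tenant] = self.running.get(tenant, 0) + 1
        return True

    def release(self, tenant: str):
        self.running[tenant] -= 1
```

Rejected jobs can be queued, re-priced, or bounced back to the caller; the important property is that one tenant's burst can never monopolize the prover fleet.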

2) Choose aggregation deliberately: recursion trees, SNARK-packers, or external verifiers

There are three main strategies for reducing verification costs:

A. Recursive accumulation (STARK/FRI → SNARK/KZG)

The highest-throughput approach today: generate many leaf proofs, verify them inside a recursion circuit, and wrap the result in a single Groth16 or Plonk proof for L1. This can cut gas costs dramatically; depending on your wrapper and public inputs, expect roughly 200k to 900k gas per batch.

Keep your public inputs small. Verifier cost scales with the number of pairings and multi-scalar operations on BN254 or BLS12-381. Under EIP‑1108, a bn128 pairing check costs 45,000 gas plus 34,000 per pairing (the k value), so Groth16 verifiers typically land around 200k to 300k gas. Minimize public I/O in your circuit designs wherever possible.
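The EIP-1108 arithmetic is easy to sanity-check. A Groth16 verification reduces to one product of k = 4 pairings, and the remaining gap to the observed 200k-300k totals comes from the MSM over public inputs plus calldata and contract overhead.

```python
def bn128_pairing_gas(k: int) -> int:
    """EIP-1108 cost for a bn128 pairing check with k pairings."""
    return 45_000 + 34_000 * k

# A 4-pairing Groth16 check: 45,000 + 34,000 * 4 = 181,000 gas,
# before the public-input MSM and calldata costs that push real
# verifier totals into the ~200k-300k range.
```

This is also why trimming public inputs pays off: each extra input adds scalar-multiplication work on top of the fixed pairing bill.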

B. Proof aggregation schemes (e.g., SnarkPack, aPlonk)

If you generate many Groth16 or Plonk proofs over different statements, aggregation schemes like SnarkPack shine: thousands of proofs can be checked in logarithmic time, keeping gas costs low. Protocol Labs, for example, aggregated 8,192 Groth16 proofs with off-chain verification times of 33 to 163 milliseconds. This approach wins when recursion engineering becomes more expensive than simply dropping an aggregator into your workflow (eprint.iacr.org).

  • Done well, the on-chain footprint stays small: one "super-proof" verification at a few hundred thousand gas, plus a cheap per-proof inclusion check. Real-world deployments report around 380k gas for the base verification and about 16k per inclusion call. Budget with some headroom and benchmark against your own curve and verifier (docs.electron.dev).

C. Off-chain verification via AVSs (Aligned Layer) with on-chain attestations

For apps whose trust model admits an AVS, look at Aligned's Proof Verification Layer. It verifies proofs off-chain using a set of restaked operators, then posts the results to Ethereum with aggregated BLS signatures. The savings are substantial: 90-99% versus direct L1 verification, with current gas costs in the tens of thousands per proof, and it supports frameworks such as Risc0, SP1, and Groth16/Plonk.
This is especially attractive for frequent or expensive verifications, like STARKs (docs.succinct.xyz).

The catch: this settlement route adds a dependency and an entirely new security model to reason about. Many projects take a hybrid approach, using an AVS for the heavy steady-state workload while falling back to direct L1 verification for final checkpoints.

Decision Rule of Thumb

  • If you settle every N L2 blocks and your wrapper's gas consumption stays below about 500k, choose recursion (A).
  • If you have many independent app proofs to check, use aggregation (B).
  • If your proofs are large (for example STARKs) or verified frequently, consider an AVS (C) with reliable fallback options.
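The rule of thumb condensed into a sketch; the 500k wrapper-gas cutoff is the article's rough number, while the threshold for "many" independent proofs is an invented placeholder you should tune to your workload.

```python
def pick_strategy(wrapper_gas: int, independent_proofs: int,
                  large_stark_or_frequent: bool) -> str:
    """Map the decision rule of thumb onto strategies A, B, C."""
    if large_stark_or_frequent:
        return "C: AVS verification with L1 fallback"
    if independent_proofs > 100:          # hypothetical "many proofs" cutoff
        return "B: SnarkPack-style aggregation"
    if wrapper_gas < 500_000:
        return "A: recursive accumulation"
    return "re-evaluate: shrink the wrapper or reconsider B/C"
```

In practice most teams end up with a hybrid (for example A for settlement plus C for intraday checks), but encoding the default rule keeps the decision auditable.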

3) Engineer your L1 verification for the chain you’re on: BN254 today, BLS12‑381 now viable

Ethereum's EIP-1108 dramatically cut costs for the BN254 (alt_bn128) precompiles: a pairing check costs 45k plus 34k·k gas. That makes BN254-based Groth16 the cheapest verification path on L1, and it's a solid default for your outermost wrapper unless you have a strong reason to deviate (eips.ethereum.org).

On May 7, 2025, Pectra shipped the BLS12-381 precompiles specified by EIP-2537, enabling native, efficient BLS-curve verification on-chain and giving you more flexibility in SNARK and signature-scheme choices. If your stack already uses BLS12-381 (consensus tooling, bridging, light clients), re-run your cost analysis: this precompile removes one of the biggest reasons to force everything onto BN254 (ethereum.org).

Implementation Detail

Keep your verification contract upgradable, ideally behind a timelock, or make it immutable with a configurable verifier target. Either way, you can swap between BN254 and BLS12‑381 verifiers without migrating your entire rollup contract suite.


4) Blobspace is your friend--if you learn its quirks

After Dencun (EIP-4844) and Pectra, Ethereum's data landscape looks very different.

Blobs come in 128 KiB units with their own EIP-1559-style fee market ("blob gas"). Consensus clients retain them for roughly 18 days; only the KZG commitment persists on L1. For longer retention, use archive providers such as Blocknative or Blockscout, and design your data retrieval and analytics around that 18-day window (eips.ethereum.org).

  • Capacity and costs: Dencun shipped with a target/max of 3/6 blobs per block, roughly 384 to 768 KiB (digitalfinancenews.com). Pectra, live since May 7, 2025, raised the target/max to 6/9 per block, doubling target capacity and keeping blob fees low for an extended stretch. Make sure your batcher is configured to exploit the larger target (ethereum.org).

Blob fee dynamics work like EIP-1559: the blob base fee rises under sustained demand and falls when blocks are under-target. Pricing is usually steady unless L2 demand surges, so monitor for spikes (blocknative.com).

Batcher configuration best practices (OP Stack, Arbitrum)

Prefer blobs, but keep an automatic fallback to calldata for the moments when blobs are temporarily unavailable or overpriced. Post-Dencun, both stacks fully support blob posting (docs.optimism.io).

  • Tune your submission cadence: use MAX_CHANNEL_DURATION to get regular, fully packed blobs; 30 to 60 minutes works well for medium-throughput chains. Posting half-full blobs wastes money, while posting too rarely delays "safe" heads and hurts UX (docs.optimism.io).
  • Multi-blob transactions: post-Pectra you can fit up to 9 blobs in a block, but be careful with "mega-blob" single transactions unless you have builder agreements in place. More blobs per transaction raises the odds you'll need a replacement, and replacement transactions pay doubled fees. Start with 1 to 3 blobs per transaction and scale up gradually (docs.optimism.io).
  • Compression: use brotli-10 and pack tightly to the 131,072-byte blob boundary; this can save you an entire extra blob. Monitor your fill rates and ask whether each marginal transaction is worth posting yet (specs.optimism.io).
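The packing arithmetic behind those bullets is worth making explicit. This sketch only covers the blob-count math; compression itself (brotli-10) is out of scope here.

```python
import math

BLOB_BYTES = 131_072   # 128 KiB per EIP-4844 blob

def blobs_needed(payload_bytes: int) -> int:
    """How many blobs a payload occupies (at least one per submission)."""
    return max(1, math.ceil(payload_bytes / BLOB_BYTES))

def fill_rate(payload_bytes: int) -> float:
    """Fraction of purchased blobspace actually used."""
    return payload_bytes / (blobs_needed(payload_bytes) * BLOB_BYTES)

# One 180 KiB batch alone: 2 blobs at ~70% fill.
# Two batches (~360 KiB) combined: 3 blobs at ~94% fill.
```

Since you pay per whole blob, the fill rate is the number to watch: combining batches until it clears ~90% is usually the cheapest policy.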

Budget update: since Dencun, average L1 gas fees have dropped significantly as L2 activity migrated to blobs, which has cut L2 user fees considerably. ETH price and usage fluctuate, but the shift matters: blobspace keeps data costs steady as you grow (tradingview.com).


5) DA optionality: EigenDA, Celestia, Avail

Keep your options open across DA backends. Even if Ethereum blobs serve you well today, future shifts in price, demand, or integrations could make switching the right move.

  • EigenDA (AVS on EigenLayer): mainnet launched in 2024, with a 2025 upgrade boosting throughput to around 100 MB/s; it already serves teams like Fuel and Aevo. A good fit if you want restaked security and high throughput with close ties to Ethereum (coindesk.com).
  • Celestia: a popular choice for cost-sensitive data availability, with Manta Pacific and a range of SDK integrations already live. Worth exploring if minimizing DA cost matters more to you than L1-native settlement (theblock.co).
  • Avail: targeting mainnet in 2024-2025, Avail is chain-agnostic, uses KZG commitments with data availability sampling, and is building out its validator set. If portability across ecosystems matters, run a proof of concept (coindesk.com).

Design Tip

Abstract your DA publishing path behind an interface. You keep the same L1 commitment schema (or add an adapter), so your L1 contracts and watchers never change even if you switch or mix DA backends.
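A minimal sketch of that abstraction. The class and method names are invented, and the backends stub out the commitment with a hash; a real blob backend would build a type-3 transaction and return the KZG versioned hash.

```python
import hashlib
from abc import ABC, abstractmethod

class DABackend(ABC):
    """The batcher targets this interface; backends are interchangeable."""

    @abstractmethod
    def publish(self, payload: bytes) -> bytes:
        """Post data, return a 32-byte commitment to anchor on L1."""

class BlobBackend(DABackend):
    def publish(self, payload: bytes) -> bytes:
        # Stub: production code would post a blob tx and return the
        # KZG versioned hash of the blob.
        return hashlib.sha256(b"blob:" + payload).digest()

class AltDABackend(DABackend):
    def publish(self, payload: bytes) -> bytes:
        # Stub for an alternative DA network (EigenDA, Celestia, Avail...).
        return hashlib.sha256(b"altda:" + payload).digest()
```

Because both backends return the same commitment shape, the L1 contracts and watchers that consume it stay untouched when you swap implementations.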


6) Verification frequency and fee model that won’t paint you into a corner

When to verify on L1

  • Post a proof every M seconds or every N blocks, whichever comes first, with traffic-aware thresholds so each settlement amortizes enough transactions. Watch latency: bridges and exchanges typically want finality in roughly 2 to 5 minutes during normal operation. If blocks arrive slowly, fall back to time-based sealing so blobs stay full without delaying settlement.
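The "every M seconds or N blocks, whichever first" rule, plus a fill-aware early seal, can be sketched as a single predicate. All thresholds here are illustrative defaults, not recommendations.

```python
BLOB_BYTES = 131_072  # 128 KiB per EIP-4844 blob

def should_seal(seconds_open: float, blocks_pending: int,
                bytes_pending: int, max_seconds: float = 180,
                max_blocks: int = 50, full_blobs_target: int = 3) -> bool:
    """Decide whether to seal the current batch and settle."""
    if seconds_open >= max_seconds:
        return True                 # latency bound for bridges/exchanges
    if blocks_pending >= max_blocks:
        return True                 # block-count bound
    # Seal early if the pending data already fills whole blobs,
    # so we never pay for mostly-empty blobspace.
    return bytes_pending >= full_blobs_target * BLOB_BYTES
```

Running this on every new L2 block gives you the time-based fallback automatically: when traffic is slow, the seconds condition fires first and blobs still go out on schedule.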

L2 fee policy hygiene (zkSync as a reference)

Split fees into two parts: L2 compute costs and L1 costs (pubdata and verification). A "gas_per_pubdata_limit" caps how much each transaction can be charged for L1 data, so a sudden blob-fee spike doesn't translate into a surprise bill. This approach is battle-tested in production (docs.zksync.io).

Charge batch overhead proportionally to the resources actually consumed: bootloader slots, memory, and pubdata bytes are a good starting point. This aligns incentives and keeps the economics stable as demand fluctuates (docs.zksync.io).
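A heavily simplified sketch of the two-part split described above, in the zkSync style: the current pubdata price is translated into "gas per pubdata byte," and the transaction is rejected if that exceeds the user's declared limit. The field names and arithmetic are illustrative, not the protocol's exact fee model.

```python
def tx_fee_wei(l2_gas_used: int, l2_gas_price: int,
               pubdata_bytes: int, pubdata_price_per_byte: int,
               gas_per_pubdata_limit: int) -> int:
    """Total fee = (L2 compute gas + pubdata charged as gas) * gas price."""
    # Translate the current pubdata price into gas per pubdata byte.
    gas_per_pubdata = pubdata_price_per_byte // l2_gas_price
    if gas_per_pubdata > gas_per_pubdata_limit:
        # A blob-fee spike pushed pubdata past what the user agreed to pay.
        raise ValueError("pubdata price exceeds gas_per_pubdata_limit")
    return (l2_gas_used + pubdata_bytes * gas_per_pubdata) * l2_gas_price
```

The useful property is the cap: when blob fees spike, transactions fail fast (and can be retried with a higher limit) instead of silently charging users whatever L1 happens to cost.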

Concrete cost anchors for L1 verification

  • Groth16 on BN254: expect roughly 200k to 300k gas; use the EIP‑1108 formula to estimate for your specific pairing count and EC operations (eips.ethereum.org).
  • Plonk-KZG: expect roughly 600k to 1M gas depending on configuration and public inputs; trim unneeded public I/O to stay at the low end (blog.zkcloud.com).
  • STARK direct verification: roughly 5-6 million gas per proof, which is why many teams SNARK-wrap to cut on-chain costs (blog.lambdaclass.com).
  • AVS verification (Aligned): roughly 90-99% cheaper per proof, with current L1 gas in the tens of thousands per proof for BLS aggregation. Budget both the L1 gas and the off-chain verification costs (docs.succinct.xyz).

7) Reference architectures you can ship

A. High-throughput zkEVM with recursion and blob-first DA

  • Prover: a GPU-heavy pipeline with leaf circuits per block segment; Plonky3/STARK at the leaves and a Groth16 wrapper on BN254 for L1.
  • Aggregation: a two-level recursion tree per batch, typically 2 to 4 seconds of aggregation time on mid-range GPUs.
  • Data availability: Ethereum blobs, targeting 6 with a max of 9 post-Pectra. Above 90% utilization, the batcher sends 1 to 3 blobs per transaction every 2 to 5 minutes; below that, it seals on a 5-minute timer.
  • L1 verification: a Groth16 verifier with at most 4 pairings and minimal public inputs, costing roughly 200k to 250k gas (eips.ethereum.org).

Why It Works

  • Predictable blob costs: you know what you're paying for data.
  • Minimal verifier gas: verification stays cheap at any cadence.
  • Quick settlement: batches finalize fast, so users aren't left waiting.

B. STARK rollup with SNARK‑wrapping + AVS fallback

  • Prover: STARK leaf proofs with a periodic SNARK wrap (Plonk-KZG) for L1.
  • Aggregation: SnarkPack for many heterogeneous app proofs; plain recursion if your workload is simpler.
  • Verification: AVS (Aligned) for frequent intraday checks, with direct L1 verification at final checkpoints every few hours (docs.succinct.xyz).

Why It Works

This keeps per-proof costs low even during peak demand while still hitting hard L1 finality at every checkpoint.

C. zkVM coprocessor rollup with external proof markets

  • Prover: SP1/Risc0 with GPU provers sourced from decentralized proving markets; see the Succinct Prover Network and RISC Zero Boundless. You can set quotas and price for burst capacity (succinct.xyz).
  • Aggregation: a periodic recursive accumulator anchored to L1, with optional "priority proving" for high-value transactions under SLA pricing.
  • DA: blobs for now, with the DA abstraction leaving an easy path to EigenDA or Avail if the economics shift (coindesk.com).

Why It Works

It comes down to two things: flexible capacity from open proving markets, and a compact, reliable on-chain footprint from recursion.


8) Don’t forget the blob lifecycle and data ops

  • Retrieval window: blobs expire roughly 18 days (4096 epochs) after inclusion. Make sure your provers, full nodes, and analytics backfills pull data well inside that window, and subscribe to an archival service such as the Blocknative Blob Archive API or Blockscout if you need long-term access.
  • Observability: track blob utilization (bytes used per 131,072), blob gas spend, and how many batches spill into an extra blob. These KPIs drive your compression and timing tuning; see the Optimism Specs for details.
  • Get ready for PeerDAS: Pectra raised blob targets to 6/9, and PeerDAS promises further increases via client-side sampling. Treat the target and max as tunable parameters in your batcher, and exercise them in staging; Ethereum's roadmap has the details.

9) Worked examples: plug numbers into your roadmap

  • Example 1: you finalize every three minutes with a Groth16 wrapper consuming about 230,000 gas. At an average base fee plus tip of 3 gwei, each settlement costs about 0.00069 ETH. Amortized over a batch of 10,000 L2 transactions, that's 0.000000069 ETH per transaction, excluding blob data availability. Adding blob DA at baseline blob fees with 1 or 2 blobs per batch typically adds only a fraction of a cent. Keep a 5× surge buffer for unexpected spikes (eips.ethereum.org).
  • Example 2: a STARK proof verified directly on L1 costs about 5-6 million gas; at 3 gwei, that's roughly 0.015-0.018 ETH, too pricey for frequent checks. Instead, SNARK-wrap it down to around 700k gas, or use an AVS to cut it to around 40k gas and checkpoint to L1 hourly (community.starknet.io).
  • Example 3: blob packing. Suppose a typical batch compresses to about 180 KiB. Posted alone, it needs two blobs and leaves the second mostly empty. Combine two batches (about 360 KiB) into three blobs (about 384 KiB), compress with brotli-10 to pack tightly, and post every couple of minutes. Monitor utilization and tune MAX_CHANNEL_DURATION to keep the fill rate at 90% or higher (docs.optimism.io).
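Example 1's arithmetic checks out; here it is as a few lines you can adapt to your own gas figures (blob DA excluded, as in the example).

```python
GWEI = 10**9  # wei per gwei; also gwei per ETH

settle_gas = 230_000        # Groth16 wrapper settlement gas
fee_per_gas_gwei = 3        # average base fee + tip, in gwei
txs_per_batch = 10_000      # L2 transactions amortized per settlement

# 230,000 gas * 3 gwei = 690,000 gwei = 0.00069 ETH per settlement
settle_eth = settle_gas * fee_per_gas_gwei / GWEI

# Amortized: 0.00069 / 10,000 = 0.000000069 ETH per L2 transaction
per_tx_eth = settle_eth / txs_per_batch
```

Swap in your own wrapper gas, fee assumptions, and batch size; adding a 5× surge multiplier to fee_per_gas_gwei gives the buffer the example recommends.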

10) Security and upgrade playbook

  • Guardrails during upgrades: when rolling out new provers or changed recursion code, run "shadow proofs" (parallel proofs that aren't used for settlement) so you can test without risk, much as the Boojum team rolled out in stages. Don't promote until live results have matched for several days (theblock.co).
  • Circuit/VM diversity: keep a backup verification path for key circuits, using a different library or curve. Even audited zkVMs have turned up soundness and completeness bugs, and diversity contains the blast radius (arxiv.org).
  • Verifier upgradability: if you adopt the EIP-2537 BLS12-381 precompiles, plan the verifier migration: a time-locked upgrade with clear community communication, plus gas and correctness testing on Sepolia or Holesky forks before mainnet (ethereum.org).

11) A crisp checklist for CTOs and heads of product

  • Provers
  • Build a GPU-powered pipeline with explicit SLOs and per-tenant quotas.
  • Build a recursion wrapper with few public inputs; keep gas below 500k.
  • Run canary recursion on a second stack in production.
  • Aggregation/Verification Alright, we need to figure out if we’re going to use recursion, go for that SnarkPack-style aggregation, or stick with AVS. Let’s make sure we jot down any backup plans we might come up with too! Hey team,

After the Pectra update, let’s work on modifying the verifier contracts so they can handle a swap between BN254 and BLS12-381. This will make our system a lot more flexible and adaptable moving forward. Thanks!

  • DA and Batcher
  • Post blobs first with a calldata fallback; time submissions so blobs are at least 90% full.
  • Cap multi-blob transactions (1-3 blobs to start), and define a replacement policy plus a strategy for blob fee doubling.
  • Subscribe to a blob archival service so batch data stays retrievable beyond the ~18-day L1 retention window.
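The blob-first-with-calldata-fallback item reduces to a per-posting cost comparison. A rough sketch, using the EIP-4844 blob gas accounting and the EIP-2028 16 gas/byte non-zero calldata cost as a worst-case assumption:

```python
# Rough cost comparison between posting a payload as EIP-4844 blobs and as
# calldata, driving a blob-first policy with a calldata fallback.
# Fee inputs are in wei; 16 gas/byte is the EIP-2028 non-zero-byte cost,
# used here as a worst-case assumption for the calldata path.
BLOB_SIZE = 128 * 1024          # blob gas per blob equals its 128 KiB size
CALLDATA_GAS_PER_BYTE = 16

def blob_cost(payload_len: int, blob_base_fee: int) -> int:
    blobs = -(-payload_len // BLOB_SIZE)  # ceiling division
    return blobs * BLOB_SIZE * blob_base_fee

def calldata_cost(payload_len: int, base_fee: int) -> int:
    return payload_len * CALLDATA_GAS_PER_BYTE * base_fee

def choose_da(payload_len: int, blob_base_fee: int, base_fee: int) -> str:
    """Blob-first: fall back to calldata only when blobs are pricier."""
    if blob_cost(payload_len, blob_base_fee) <= calldata_cost(payload_len, base_fee):
        return "blob"
    return "calldata"

# 360 KiB at a 1 wei blob base fee vs a 30 gwei execution base fee:
print(choose_da(360 * 1024, 1, 30_000_000_000))  # blobs win easily here
```

Because the blob fee market reprices independently of execution gas, the cheaper path genuinely flips during fee spikes, which is why the fallback belongs in the batcher rather than in an ops runbook.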
  • Fees
  • Split the fee model into two parts: L2 compute fees (the cost of executing transactions on the rollup) and L1 pubdata/verification fees (the cost of posting data and verifying proofs on the base layer).
  • Set gas_per_pubdata_limit and confirm the batch overhead allocation is correct.
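A minimal sketch of that two-part fee split, assuming a zkSync-style gas_per_pubdata model; the constants are illustrative, not any specific chain's parameters:

```python
# Two-part fee sketch: users pay for L2 compute plus their share of L1
# pubdata/verification, with pubdata charged at a gas_per_pubdata rate.
# All constants are illustrative assumptions, not a real chain's values.

GAS_PER_PUBDATA_LIMIT = 50_000  # max L2 gas a tx may be charged per pubdata byte

def tx_fee(l2_gas_used: int, pubdata_bytes: int,
           l2_gas_price: int, gas_per_pubdata: int) -> int:
    """Total fee in wei; rejects txs whose pubdata rate exceeds the limit."""
    if gas_per_pubdata > GAS_PER_PUBDATA_LIMIT:
        raise ValueError("gas_per_pubdata above limit; tx should not be included")
    compute_fee = l2_gas_used * l2_gas_price
    pubdata_fee = pubdata_bytes * gas_per_pubdata * l2_gas_price
    return compute_fee + pubdata_fee
```

For example, `tx_fee(100_000, 200, 250_000_000, 800)` charges 100k gas of compute plus 160k gas-equivalent of pubdata; the limit check is what protects users when the L1 pubdata price spikes between signing and inclusion.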

Ship this, and you get a rollup whose proving and fees flex with demand instead of fighting it.


Further reading and sources

  • EIP-4844: blob mechanics, pricing, and the ~18-day retention window, plus the Dencun-to-Pectra changes: blob target/max rising from 3/6 to 6/9 and new blob base fee rules (eips.ethereum.org).
  • EIP-1108: repriced gas for the alt_bn128 precompiles, the basis for estimating Groth16/Plonk pairing verification costs (eips.ethereum.org).
  • STARK vs. SNARK verification costs on Ethereum, and the case for off-chain verification via AVS (Aligned) (community.starknet.io).
  • Succinct SP1 GPU prover economics and recursion stack, with practical cost anchors (succinct.xyz).
  • OP Stack and Arbitrum blob posting configuration and operations (docs.optimism.io).
  • Blob archival: the Blocknative Blob Archive API and blob support in Blockscout (Blocknative blog).
  • DA alternatives: EigenDA mainnet throughput, growing Celestia adoption, and the Avail mainnet (coindesk.com).

At 7Block Labs, we help teams benchmark circuits on GPUs, model blob and verification costs under Pectra, and build production-grade recursion and batcher pipelines, so you can scale proof throughput with confidence.

Like what you're reading? Let's build together.

Get a free 30-minute consultation with our engineering team.
