7Block Labs
Blockchain Technology

By AUJay

Best Practices for Future-Proofing a Rollup So It Can Handle Growing Proof Throughput Without Re-Architecting

The fastest way to paint a rollup into a corner is to hard-code today's assumptions about provers, DA, and sequencers. This guide distills practices from 2025, drawing on the latest Ethereum upgrades, Layer 2 proof and DA markets, and real incidents, that let you grow proof throughput 10-100x over time without major changes to your code. (blog.ethereum.org).


Why “future‑proofing” your rollup’s proof throughput is different in 2025

Two shifts this year changed the calculus:

Ethereum shipped PeerDAS with the Fusaka fork on December 3, 2025. Blob Parameter Only (BPO) forks follow: the blob target/max rises from 6/9 to 10/15 on December 9, with a subsequent BPO fork raising it again to 14/21. Blob base fee dynamics and new EVM opcodes such as CLZ directly affect verifier costs. A rollup that cannot absorb more blobspace and verify more cheaply and quickly leaves significant capacity and savings on the table. (blog.ethereum.org).

Fault and validity proving matured in several directions at once. OP Stack fault proofs reached "Stage 1" on OP Mainnet; Arbitrum shipped BoLD, enabling permissionless validation; GPU-first zkVM provers such as SP1 Turbo and Boojum cut proving costs; and external proof markets (Bonsai, Kalypso, ProverNet) let you buy proving capacity instead of building it. Clean seams between provers, data availability (DA), and sequencers are no longer a nice-to-have; they are the architecture. (theblock.co).

Below is the playbook we use at 7Block Labs to absorb a 10x jump in proof throughput without a rewrite.


1) Treat Ethereum’s blobspace like an elastic utility, not a fixed quota

  • Plan for blob parameter churn. After PeerDAS, Ethereum will ship small BPO forks that adjust only blob parameters (target, max, update fraction), decoupled from full hard forks. Your DA writer and fee estimator should read these from chain config and adapt without a redeploy; keep a config map keyed by chain ID and effective-at slot. (blog.ethereum.org).
  • Model retention correctly. Blobs are pruned after roughly 18 days, so never rely on indefinite availability from L1. Own your retrieval and caching horizons, and build reconstructors that degrade gracefully. (datawallet.com).
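The slot-keyed config map can be sketched as follows. The activation slots below are placeholders, not real fork slots; a production batcher would load this schedule from the chain config rather than hardcode it:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BlobParams:
    target_blobs: int
    max_blobs: int

# Hypothetical schedule: entries are (activation_slot, params), sorted
# ascending. Slot numbers are illustrative placeholders only.
BLOB_SCHEDULE = {
    1: [  # Ethereum mainnet
        (0,          BlobParams(6, 9)),    # pre-BPO baseline
        (12_000_000, BlobParams(10, 15)),  # BPO1
        (12_500_000, BlobParams(14, 21)),  # BPO2
    ],
}

def blob_params_at(chain_id: int, slot: int) -> BlobParams:
    """Return the blob target/max in effect at the given slot."""
    active = BLOB_SCHEDULE[chain_id][0][1]
    for activation_slot, params in BLOB_SCHEDULE[chain_id]:
        if slot >= activation_slot:
            active = params
    return active
```

Because the lookup is pure configuration, a BPO fork becomes a schedule entry rather than a redeploy.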

The fee market is now two markets. Blob gas has its own base fee and no longer competes with execution gas. Your batcher needs to price across both lanes and know when to divert to an Alt-DA provider (see Section 3). (datawallet.com).

Action: after Fusaka, raise your batcher's blob budget in two steps matching the BPO schedule, then increase batch size or the number of L2 blocks per proof. Keep a fallback that shrinks batches when the blob base fee spikes. (blog.ethereum.org).
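One way to express that fallback is a fee-aware blob budget. The ceiling and the linear back-off here are illustrative policy choices, not protocol values:

```python
def blobs_per_batch(blob_base_fee_gwei: float,
                    target_blobs: int,
                    fee_ceiling_gwei: float = 50.0,
                    min_blobs: int = 1) -> int:
    """Scale the batch's blob budget down as the blob base fee climbs
    toward a configured ceiling; at or above the ceiling, post minimal
    batches until fees normalize."""
    if blob_base_fee_gwei >= fee_ceiling_gwei:
        return min_blobs
    # Linear back-off between zero and the ceiling (illustrative policy).
    fraction = 1.0 - blob_base_fee_gwei / fee_ceiling_gwei
    return max(min_blobs, round(target_blobs * fraction))
```

A real batcher might prefer a step function aligned to fee bands, but the key point stands: the policy lives in config-tunable code, not in the posting pipeline.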


2) Implement a PeerDAS‑ready data path now (even if you won’t use it day one)

PeerDAS changes how blob availability is maintained: instead of downloading full blobs, block producers and peers sample erasure-coded "cells." Clients must verify cell KZG proofs, and senders must include them in typed blob transaction wrappers. Build this into your blob transaction builder and light client now. (eips.ethereum.org).

Key Engineering Notes:

Put a cell-proof library behind an interface so you can swap implementations without touching the batcher. Validate against EIP-7594's cell proof semantics and subnet mapping. (eips.ethereum.org).
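A minimal sketch of such an interface, with a stand-in backend for tests. A real implementation would wrap a KZG library (e.g. a c-kzg binding); the 128 cells-per-extended-blob count follows EIP-7594, while everything else here is an illustrative shape:

```python
from typing import Protocol, Sequence

class CellProver(Protocol):
    """Interface the batcher codes against; concrete KZG backends
    plug in behind it."""
    def compute_cells_and_proofs(
        self, blob: bytes
    ) -> tuple[Sequence[bytes], Sequence[bytes]]: ...
    def verify_cell_proof(self, commitment: bytes, cell_index: int,
                          cell: bytes, proof: bytes) -> bool: ...

class NoopCellProver:
    """Stand-in backend for tests; produces fake proofs so pipeline
    plumbing can be exercised without cryptography."""
    CELLS_PER_EXT_BLOB = 128  # per EIP-7594

    def compute_cells_and_proofs(self, blob: bytes):
        cell_size = max(1, len(blob) // self.CELLS_PER_EXT_BLOB)
        cells = [blob[i:i + cell_size] for i in range(0, len(blob), cell_size)]
        proofs = [b"fake-proof"] * len(cells)
        return cells, proofs

    def verify_cell_proof(self, commitment, cell_index, cell, proof):
        return proof == b"fake-proof"
```

Swapping in an optimized native backend later is then a constructor change, not a batcher change.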

Update your monitoring: track sampled columns per slot and the reconstruction margin (at least 50% of columns are needed), and alert when sampling success approaches that threshold. (eips.ethereum.org).


3) Keep DA fungible: wire Alt‑DA once, not per provider

Ethereum blobspace is the right default, but cost, latency, or incident response may justify a secondary data availability (DA) path. Wire one Alt-DA adapter interface and keep provider-specific clients as plugins:

  • OP Stack Alt-DA defines a standard DA server interface, with Celestia running a reference op-alt-da provider. Treat this as your contract. (docs.optimism.io).

  • Arbitrum Nitro ships with support for Rollup (L1), AnyTrust (DAC-based), and Celestia DA out of the box. If you want very low fees with explicit trust assumptions, study its DACert semantics. (docs.arbitrum.io).

  • NEAR DA provides a Blob Store contract and a light client, with documented integrations for OP, Orbit, and CDK. Treat it as another Alt-DA backend, noting its explicit retention window: roughly 60 hours before you must rely on archival. (docs.near.org).

Operational Guardrails

  • Always commit to L1 with a pointer (commitment) that can be checked against the Alt-DA dataset, and ensure your bridge's fraud and validity paths can fall back to full data if a committee or provider withholds it. (docs.arbitrum.io).

  • Keep DA destination switching in configuration, not code. This matters most when blob fees spike or a provider's SLA degrades.
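The guardrails above amount to a DA multiplexer with configured failover. A sketch, with stub backends standing in for real provider clients (all names and commitment formats here are illustrative):

```python
from dataclasses import dataclass, field

class DAWriteError(Exception):
    pass

class DABackend:
    """Adapter interface: concrete clients (Ethereum blobs, Celestia,
    NEAR DA, ...) are plugins implementing write()."""
    name = "unnamed"

    def write(self, batch: bytes) -> str:
        """Post the batch; return a commitment/pointer for the L1 record."""
        raise NotImplementedError

@dataclass
class DAMux:
    primary: DABackend
    fallbacks: list = field(default_factory=list)

    def post(self, batch: bytes) -> tuple[str, str]:
        """Try the configured primary, then each fallback in order.
        Returns (backend_name, commitment) so the L1 pointer records
        where the data actually landed."""
        for backend in [self.primary, *self.fallbacks]:
            try:
                return backend.name, backend.write(batch)
            except DAWriteError:
                continue
        raise DAWriteError("all DA backends failed")

# Illustrative stubs standing in for real provider clients.
class BlobBackend(DABackend):
    name = "eth-blobs"
    def __init__(self, healthy: bool = True):
        self.healthy = healthy
    def write(self, batch: bytes) -> str:
        if not self.healthy:
            raise DAWriteError("blob posting failed")
        return "blob:0x" + batch[:4].hex()

class CelestiaBackend(DABackend):
    name = "celestia"
    def write(self, batch: bytes) -> str:
        return "cel:0x" + batch[:4].hex()
```

Because failover order lives in the `DAMux` constructor arguments, switching destinations is a config change with no code deployment.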

Market caution: shared-sequencer and Alt-DA projects can disappear quickly. Astria had a live mainnet and RaaS offering and was still wound down by 2025. To avoid vendor lock-in, treat every DA and sequencing component as replaceable. (astria.org).


4) Decouple proving with a “prover fabric” that can burst capacity

Throughput growth comes from three levers: larger batches, faster provers, and parallel proof markets. Build a "prover fabric" abstraction with these properties:

  • One API, multiple backends:
  • Local GPU farms (SP1 GPU or Boojum-CUDA) for baseline capacity (succinct.xyz), plus managed services with SLAs for reliability; RISC Zero Bonsai, for example, targets 99.9% uptime. (risc0.com).
  • Marketplace connectors (Marlin Kalypso, Brevis ProverNet) for burst capacity; mind their staking and slashing terms. (research.marlin.org).
  • Simple job specs: circuit and version, expected cycles, max witness bytes, target deadline, redundancy factor, and bid caps. This lets you run out-of-process reverse auctions for proofs without changing orchestration.
  • Recursive aggregation as configuration, not code: make fan-in, recursion depth, and tree arity tunable so you can reshape the batch tree for whatever GPUs you run next year. This is one of the main payoffs of Plonky2-style recursion and similar schemes. (polygon.technology).
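The job spec and reverse auction described above can be sketched as a small selection routine. Field names and the quote shape are illustrative assumptions, not any marketplace's actual API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProofJob:
    circuit: str          # circuit name/version
    expected_cycles: int
    max_witness_bytes: int
    deadline_s: int       # wall-clock target
    redundancy: int       # independent proofs to request
    max_bid_wei: int      # bid cap for marketplace backends

@dataclass(frozen=True)
class Quote:
    backend: str
    price_wei: int
    eta_s: int

def select_backends(job: ProofJob, quotes: list[Quote]) -> list[str]:
    """Reverse auction: keep quotes that meet the deadline and bid cap,
    cheapest first, up to the requested redundancy."""
    eligible = [q for q in quotes
                if q.eta_s <= job.deadline_s and q.price_wei <= job.max_bid_wei]
    eligible.sort(key=lambda q: q.price_wei)
    return [q.backend for q in eligible[:job.redundancy]]
```

Because the auction consumes only the job spec, the orchestrator never needs to know which backends exist; adding a new prover is a matter of submitting quotes.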

Capacity Planning Checklist:

  • Track "proof seconds per block" and "proofs in queue." These feed your autoscaler: scale up when queue depth exceeds N blocks or P95 latency climbs toward your finality SLO.

  • Dual-source during risk windows. Around network upgrades and peak traffic, route the same batch through two providers. The extra cost is cheap insurance against a liveness incident.
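The autoscaling trigger from the checklist reduces to a small decision function. The thresholds are illustrative defaults, not recommendations:

```python
def scale_decision(queue_depth_blocks: int, p95_latency_s: float,
                   max_queue_blocks: int = 8,
                   finality_slo_s: float = 600.0) -> str:
    """Autoscaling action from the two leading indicators above:
    proof queue depth and P95 proving latency."""
    if queue_depth_blocks > max_queue_blocks or p95_latency_s > finality_slo_s:
        return "scale_up"    # burst to external provers / add GPUs
    if queue_depth_blocks == 0 and p95_latency_s < 0.5 * finality_slo_s:
        return "scale_down"  # shed expensive surplus capacity
    return "hold"
```

In practice you would add hysteresis (cooldown windows) so a single noisy sample does not flap capacity up and down.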

Market Reality

Modern GPU provers are dramatically cheaper per transaction than earlier zkVMs: SP1 reports proving costs of a fraction of a cent for a typical Ethereum-sized transaction on commodity GPU instances. The point is to buy surplus capacity cheaply rather than micromanage your own at peak. (succinct.xyz).


5) Make your proof frequency elastic (without changing your state model)

In steady state, a rollup posts many L2 blocks per proof. Under high blob fees or bridge pressure, it can pay to "go thinner": issue proofs more frequently with smaller witnesses.

Define a "finality band": when fees are low, target a 5-8 minute validity window for zk rollups; when blob prices spike, stretch to 15-20 minutes with larger batches. This should be a config toggle, not a code change.
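As a sketch, the finality band is a pure function of the current fee band; the threshold here is an illustrative config value, not a protocol constant:

```python
def proof_cadence_minutes(blob_base_fee_gwei: float,
                          low_fee_threshold: float = 20.0) -> tuple[int, int]:
    """Pick a (min, max) proof-submission interval from the current
    blob fee band: prove often when DA is cheap, batch up when it
    is expensive."""
    if blob_base_fee_gwei <= low_fee_threshold:
        return (5, 8)    # thin, frequent proofs
    return (15, 20)      # thicker batches, fewer postings
```

Because the bands are data, operators can retune them per incident without a release.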

For optimistic rollups, prefer modern fault-proof systems with permissionless validation so throughput gains do not erode security. OP Stack fault proofs are live, and BoLD removed validator allowlists on Arbitrum. When you raise the L2 gas limit, verify your dispute tooling keeps pace with faster state outputs. (theblock.co).

Track Cannon/FP-VM developments (MIPS-64, multithreading) if you run OP and plan to raise the block gas limit; they affect memory headroom and the ability to handle larger batches. (gov.optimism.io).


6) Budget for verifier gas reductions from EVM changes

Fusaka ships several EIPs that matter to verifiers:

  • EIP‑7939 (CLZ opcode): useful in hashing and field-arithmetic routines; consider refactoring verifier bytecode to exploit it.
  • EIP‑7918 (blob base fee linked to execution cost): improves predictability for fee estimators spanning the execution and blob markets. Also note EIP‑7825 (a cap on transaction gas limits) and revised ModExp gas pricing; both protect L1 from pathological transactions and may affect verifier precompile usage.
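For the fee-predictor side, the core primitive is EIP-4844's integer exponential for the blob base fee, shown below with the EIP-4844 update-fraction constant (BPO forks may retune it). EIP-7918's execution-cost-linked floor layers on top of this and is not modeled here:

```python
def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e^(numerator/denominator),
    as specified for blob base fee computation in EIP-4844."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

MIN_BASE_FEE_PER_BLOB_GAS = 1
BLOB_BASE_FEE_UPDATE_FRACTION = 3_338_477  # EIP-4844 value

def blob_base_fee(excess_blob_gas: int) -> int:
    """Blob base fee (wei per blob gas) from the chain's excess blob gas."""
    return fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS,
                            excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)
```

The exponential shape is why fee spikes are fast: each update fraction of sustained excess multiplies the fee by roughly e, which is exactly why the batch-size fallback in Section 1 matters.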

2026 Sprint Plan

Plan a sprint to recompile and optimize verifier contracts once client behavior and the BPO forks' effect on blob targets are understood:

  1. Confirm client behavior: before touching code, measure how clients handle the new opcodes and blob parameters.
  2. Assess BPO fork impact: quantify how the changed blob targets affect posting costs and batch shapes; this drives the upgrade decisions.
  3. Recompile verifier contracts against those findings.
  4. Test thoroughly: unit tests plus a testnet deployment for real-world feedback.
  5. Deploy the new verifiers once testing confirms stability, and communicate the change clearly to stakeholders.
  6. Monitor post-deployment and address anomalies promptly.

Track roadmap updates at blog.ethereum.org, and stay ready to adjust the plan as data comes in.


7) Keep sequencing policy pluggable (MEV changes quickly)

If you run your own sequencer, treat the ordering policy as a module.

Arbitrum's Timeboost is live on One/Nova, converting latency races into timed auctions with an "express lane." Whichever path you choose, define an "ordering policy" seam so you can move between first-come-first-served, auctions, or shared sequencers as needed. (docs.arbitrum.io).
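The seam can be as small as one method. The express-lane policy below is a Timeboost-flavored sketch of a pluggable ordering module, not the actual Timeboost auction protocol:

```python
from dataclasses import dataclass
from typing import Protocol, Sequence

@dataclass(frozen=True)
class Tx:
    arrival_ns: int
    bid_wei: int

class OrderingPolicy(Protocol):
    def order(self, txs: Sequence[Tx]) -> list[Tx]: ...

class FCFS:
    """Plain first-come-first-served ordering."""
    def order(self, txs: Sequence[Tx]) -> list[Tx]:
        return sorted(txs, key=lambda t: t.arrival_ns)

class ExpressLane:
    """Sketch: the top bidder's transactions jump the queue;
    everyone else stays FCFS."""
    def order(self, txs: Sequence[Tx]) -> list[Tx]:
        if not txs:
            return []
        top = max(t.bid_wei for t in txs)
        express = sorted((t for t in txs if t.bid_wei == top),
                         key=lambda t: t.arrival_ns)
        rest = sorted((t for t in txs if t.bid_wei != top),
                      key=lambda t: t.arrival_ns)
        return express + rest
```

Because the sequencer depends only on `OrderingPolicy`, the kill switch is a one-line swap back to `FCFS`.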

Timeboost results are mixed so far: roughly $2 million in fees over three months in 2025, alongside researcher concerns about centralization and spam incentives. Keep a kill switch, and never hardcode ordering policy into clients. (docs.arbitrum.io).

If you experiment with shared sequencers, keep bridging and DA independent of the sequencer dependency. Astria's 2025 shutdown is the cautionary tale: have exit paths ready before you need them. (astria.org).


8) Data availability strategy: design for graceful failover

  • Primary: Ethereum blobs via PeerDAS. Size batches to the active blob target and watch for BPO bumps. (blog.ethereum.org).
  • Secondary: Alt-DA providers (Celestia, NEAR DA, EigenDA) behind the same DA interface. Maintain tested bindings with at least two, exercised in canary mode. (docs.optimism.io).
  • Risk transparency: if you use AnyTrust/DAC or restaked DA such as EigenDA, document the censorship and slashing limits; L2BEAT's EigenDA risk pages are a good reference. Without slashing, you are assuming an economically honest majority, not a cryptographic guarantee. (l2beat.com).

Note on EigenDA

EigenDA marketing quotes impressive theoretical throughput, but L2BEAT usage data shows actual figures considerably lower; throughput follows poster demand. Design for your measured needs, not headline numbers. (l2beat.com).


9) Observability that predicts when you’ll fall behind

Instrument three layers:

  • Prover fabric: queue depth, P50/P95 proof latency per circuit, GPU memory pressure, fail/timeout rates per provider, and "dual-source mismatch" alerts.
  • DA path: blob base fee versus execution base fee, blob utilization against target and max, PeerDAS sampling success rates, and Alt-DA write/read SLOs with error codes.
  • Sequencer: ordering policy health (auction round success, mempool backlog, signs of express-lane abuse) and L2 gas limit headroom.

Run weekly "brownout" drills: route 10-20% of batches through your backup prover and Alt-DA path while still in production, so a real failover is routine rather than a scramble.
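A deterministic split makes the drill reproducible across replays, unlike random sampling. A sketch, hashing the batch ID into a percentage bucket (the function name and parameters are illustrative):

```python
import hashlib

def route_to_backup(batch_id: int, brownout_pct: int = 15) -> bool:
    """Deterministically send roughly brownout_pct% of batches through
    the backup prover / Alt-DA path. Same batch_id always routes the
    same way, so drills replay identically."""
    digest = hashlib.sha256(batch_id.to_bytes(8, "big")).digest()
    bucket = digest[0] * 100 // 256  # uniform-ish bucket in 0..99
    return bucket < brownout_pct
```

Keying on the batch ID (rather than wall-clock randomness) also means the primary and backup paths can be diffed batch-for-batch after the drill.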


10) A minimal “future‑proof” architecture you can ship this quarter

Batcher with a DA multiplexer: Ethereum blobs as primary, one Alt-DA plugin on standby.

  • Blob transaction builder that is PeerDAS-aware and produces cell KZG proofs.
  • Prover fabric with: a local GPU pool (SP1/Boojum or your own ZK stack), one managed service (e.g. Bonsai), and one marketplace connector (Kalypso or ProverNet).
  • Recursion layer with config-driven fan-in/fan-out, and verifier contracts compiled for the Fusaka opcodes (CLZ), to be upgraded once post-Fusaka gas measurements from the ecosystem land.
  • Sequencer with an ordering policy interface: FCFS by default, an optional Timeboost-style module, and SLOs tied to queue depth and blob pricing tiers.

Every component varies by configuration rather than code, so you can absorb: (1) a BPO blob bump, (2) a faster new prover, (3) a DA provider incident, and (4) an MEV traffic spike.


Worked example: raising throughput 8-10x without changing the core

Baseline: a zk rollup verifying roughly every six minutes, posting three blobs per verification. Target: 8-10x throughput in H1 2026.

  1. After Fusaka BPO2 raises the blob target to 14, increase batch size toward 8-10 blobs per proof (respecting per-block limits), and watch the blob base fee for two weeks before raising further. (blog.ethereum.org).
  2. Switch recursion from 4-ary to 8-ary. This is a config swap, not a circuit change; it reduces aggregation depth (for 512 leaf proofs, from five levels to three) and shortens proof wall time. (polygon.technology).
  3. Route prover overflow externally: 25% of leaf proofs to Bonsai under its SLA and 15% to the Kalypso marketplace with a bid cap, keeping final aggregation local. (risc0.com).
  4. After Fusaka, redeploy the verifier with CLZ-optimized bit routines for an expected gas saving of roughly ~X%; measure the actual impact on your code path. (blog.ethereum.org).
  5. Treat Alt-DA as a pressure valve: above a blob base fee threshold, post batch data to Alt-DA with commitments on L1, and switch back once fees normalize. (docs.optimism.io).
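The arity change's effect on aggregation depth is easy to check: depth is the number of times you can fold the leaf set by the tree's fan-in before reaching one root.

```python
def aggregation_depth(n_leaves: int, arity: int) -> int:
    """Recursion levels needed to fold n_leaves leaf proofs into one
    root proof with a tree of the given fan-in (integer ceil at each
    level, so partial top layers are counted)."""
    depth = 0
    while n_leaves > 1:
        n_leaves = -(-n_leaves // arity)  # ceiling division
        depth += 1
    return depth

# 512 leaf proofs: 4-ary needs 5 levels (512→128→32→8→2→1),
# 8-ary needs 3 (512→64→8→1).
```

Fewer levels means fewer sequential aggregation rounds on the critical path, which is where the wall-time saving comes from.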

These are configuration and verifier-bytecode changes only; the state model and settlement logic are untouched.


Emerging practices worth piloting (carefully)

  • Proof markets with economic security: restaked or staked provers (Marlin/Symbiotic; Brevis, with slashing planned) reduce counterparty risk versus pure spot markets. Start with low-risk workloads. (research.marlin.org).
  • GPU-native libraries: standardize on CUDA-accelerated MSM/NTT libraries (Boojum-CUDA, ICICLE-class) across circuits so your proving stays portable across deployments; several stacks already share them.
  • Sequencer monetization with user protection: Timeboost shows order-flow revenue is possible without public mempools. Cap usage and monitor for centralization and spam issues. (docs.arbitrum.io).

Pitfalls we still see in 2025 audits

  • Hardcoding the Dencun-era "6 blobs max" in batchers, leaving capacity unused after Fusaka. Read the chain config at each slot. (blog.ethereum.org).
  • Single-vendor DA/sequencing dependencies. The Astria shutdown caught teams that had not planned fallbacks. (theblock.co).
  • Monolithic provers that must be redeployed to change recursion arity or prover backend. Make these runtime flags instead.
  • Deferring PeerDAS cell proofs to "later." The spec puts the burden on senders and producers; integrate now and avoid the scramble. (eips.ethereum.org).


Executive checklist (TL;DR)

  • Architecture
  • DA: Ethereum blobs as primary, with a tested Alt-DA plugin proven in a cutover drill. (docs.optimism.io).
  • Prover: local, managed, and marketplace backends behind one API, with recursive fan-in as a tunable. (risc0.com).
  • Sequencer: an ordering policy module supporting FCFS, Timeboost, and shared approaches, rather than a hard fork of one design. (docs.arbitrum.io).
  • Readiness
  • PeerDAS cell proofs integrated into the blob transaction path (eips.ethereum.org); verifiers recompiled for Fusaka (CLZ and others), with gas usage monitored ahead of migration. (blog.ethereum.org).
  • Operations
  • SLOs on proof queue depth, latency, and blob fee bands; weekly failovers to Alt-DA and external provers; a budget for the BPO fork cadence, with higher blob targets exercised in a canary environment. (blog.ethereum.org).

Follow this playbook and your proof throughput scales with the ecosystem (PeerDAS/BPOs, faster provers, new DA markets) while your rollup's core stays stable.


References and further reading

  • Ethereum Foundation: Fusaka mainnet announcement covering PeerDAS, the BPO schedule, and upcoming gas/UX EIPs. (blog.ethereum.org).
  • EIP‑7594: PeerDAS specification, including cell KZG proofs and one-dimensional erasure coding. (eips.ethereum.org).
  • EIP‑4844: blob pricing and fee mechanics, including blob retention. (datawallet.com).
  • OP Stack Fault Proofs: OP Labs update on the Stage 1 fault-proof system. (blog.oplabs.co).
  • Arbitrum BoLD: permissionless validation, live in 2025. (theblock.co).
  • OP Alt-DA: overview and the Celestia op-alt-da provider. (docs.optimism.io).
  • Arbitrum AnyTrust/DACerts: what AnyTrust is and its trust assumptions. (docs.arbitrum.io).
  • NEAR DA Docs: Blob Store, light client options, and retention strategies. (docs.near.org).
  • Succinct SP1 GPU Prover: benchmarks and cost claims for the SP1 GPU prover network. (succinct.xyz).
  • RISC Zero Bonsai Proving Service: SLA details and support channels. (risc0.com).
  • Timeboost Docs: documentation, fee results, and analysis. (docs.arbitrum.io).

At 7Block Labs, we help startups and enterprises turn ideas into working systems, from first architecture to scale. If you want a readiness review before your next blob or throughput increase, get in touch.

Like what you're reading? Let's build together.

Get a free 30-minute consultation with our engineering team.

7BlockLabs

Full-stack blockchain product studio: DeFi, dApps, audits, integrations.

7Block Labs is a trading name of JAYANTH TECHNOLOGIES LIMITED.

Registered in England and Wales (Company No. 16589283).

Registered Office address: Office 13536, 182-184 High Street North, East Ham, London, E6 2JA.

© 2026 7BlockLabs. All rights reserved.