By AUJay
Best practices for future-proofing rollup proof throughput: Modular Prover Architectures
Unlocking Rollup Scalability: The Role of Proof Generation
Proof generation is the core constraint on rollup scalability. This guide surveys current research, production benchmarks, and practical strategies that decision-makers can use to build high-throughput, low-latency proving today, while staying adaptable as proving technology evolves.
TL;DR
A modular prover architecture separates the pipeline into five planes (witness, trace, proving, recursion, and verification) and can raise rollup proof throughput by 10-100x. Just as important, it keeps you adaptable as ZK systems, fraud-proof systems, and hardware all evolve quickly.
Production-ready building blocks already exist: proof networks such as Succinct and ZkCloud, verification layers such as Aligned, and cross-stack interoperability via AggLayer. Together they reduce latency and cost while limiting single-vendor lock-in (theblock.co).
Why proof throughput is your real bottleneck in 2025
- Succinct's SP1 "Hypercube" zkVM proved 93% of Ethereum blocks in under 12 seconds on a cluster of 200 RTX 4090s. The team targets real-time L1 block proving for under $100k with modest hardware adjustments, a major step toward low-latency L1 and rollup proving (theblock.co).
- Hash-throughput records keep falling: StarkWare's Stwo exceeds 500,000 hashes per second on a standard quad-core CPU, Polygon's open-source Plonky3 tops 2 million hashes per second on an M3 Max, and Polyhedra's Expander reaches roughly 2 million Poseidon hashes per second on a Ryzen 7950X3D, scaling to about 16 million H/s on 256-core servers. These gains cut proof latency and make deeper recursion and aggregation affordable (starknet.io).
- Optimism's OP Stack has reached Stage 1, with permissionless fault proofs and a modular roadmap toward a "multi-proof" system: Cannon is live, with Asterisc, Kona, and ZK proofs to follow. Even optimistic rollups are moving to pluggable, redundant proof systems, so architect for mixing and matching provers (optimism.io).
The takeaway: throughput is not just raw compute. It is also architecture, operations, and vendor strategy. Treat the prover as a swappable, multi-backend service rather than a monolith.
The modular prover architecture: the five planes
Designing Five Separable Planes with Typed Interfaces
Each plane should expose a small, well-defined, typed interface. A practical way to approach the design:
1. Define the Planes
Give each plane a single, clearly bounded responsibility, and resist letting concerns leak across boundaries. In this architecture the five planes are witness, trace, proving, recursion/aggregation, and verification/settlement.
2. Establish Typed Interfaces
For every plane, define a typed interface that specifies exactly how it communicates with its neighbors. This keeps cross-plane communication explicit and makes the system easier to maintain. For example:
- Witness plane: IWitnessCollector
- Trace plane: ITraceCompiler
- Prover plane: IProverBackend
- Recursion/aggregation plane: IProofAggregator
- Verification/settlement plane: ISettlementVerifier
3. Create the Connections
Make the contracts between planes explicit: each interface should declare the methods and data types its consumers depend on. For example (illustrative C#, with domain types like Witness and Proof elided):
public interface IWitnessCollector
{
    Witness Collect(ExecutionTrace trace);
}
public interface ITraceCompiler
{
    IntermediateRepresentation Compile(Witness witness);
}
public interface IProverBackend
{
    Proof Prove(IntermediateRepresentation ir);
    Capabilities Describe();
}
public interface IProofAggregator
{
    Proof Aggregate(IReadOnlyList<Proof> subProofs);
}
public interface ISettlementVerifier
{
    bool Verify(Proof proof);
}
4. Implementation Strategy
Implement each plane behind its interface, layered so each component depends only on the interface of the plane below it. This keeps knowledge localized and makes backends swappable.
5. Testing the Interfaces
Test each interface in isolation (contract tests against mocks) and then end-to-end, so integration problems between planes surface early.
Summary
Separable planes with well-defined interfaces give you a design that is easier to operate and to evolve: each plane stays distinct but cleanly connected, and you can replace any one of them without rewriting the rest.
1) Witness Plane
- Gather and normalize execution data quickly: EVM traces as well as RISC-V, Cairo, or Wasmtime traces.
- Partition large circuits into memory-bounded subcircuits, and overlap the witness and proof phases.
- Research such as Yoimiya explores circuit partitioning and pipeline scheduling to align witness and proof timing across heterogeneous resources (arxiv.org).
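The witness/prove overlap can be sketched as a two-stage pipeline: witness generation feeds a bounded queue that a prover stage drains concurrently, so neither phase waits for the other to finish the whole batch. This is an illustrative Python sketch, not Yoimiya's actual scheduler; `generate_witness`, `prove`, and the queue depth are hypothetical stand-ins.

```python
import queue
import threading

def generate_witness(segment: int) -> dict:
    # Stand-in for real witness generation over one subcircuit segment.
    return {"segment": segment, "witness": [segment * 2, segment * 3]}

def prove(witness: dict) -> str:
    # Stand-in for a backend proof over one witness.
    return f"proof(segment={witness['segment']})"

def pipelined_prove(num_segments: int, queue_depth: int = 4) -> list:
    """Overlap witness generation and proving via a bounded queue."""
    q: queue.Queue = queue.Queue(maxsize=queue_depth)  # backpressure keeps stages in step
    proofs = []

    def witness_stage():
        for seg in range(num_segments):
            q.put(generate_witness(seg))
        q.put(None)  # sentinel: no more segments

    def prover_stage():
        while (item := q.get()) is not None:
            proofs.append(prove(item))

    producer = threading.Thread(target=witness_stage)
    consumer = threading.Thread(target=prover_stage)
    producer.start()
    consumer.start()
    producer.join()
    consumer.join()
    return proofs
```

The bounded `maxsize` is the important design choice: it gives the 1:1 backpressure that keeps witness generation from racing ahead of (or starving) the accelerators.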
2) Trace Plane
Compile traces into an intermediate representation (IR) that any prover backend can consume. Vendor IRs such as Cysic's Hypercube IR show how a hardware-friendly op set (MSM, NTT, Keccak) improves performance on both GPUs and ASICs. Adopting or mirroring such an IR smooths migrations between backends.
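One way to picture a hardware-friendly IR: a small op set biased toward the kernels accelerators are good at, plus a lowering pass from trace steps to IR instructions. The op names and the `lower_trace` helper below are hypothetical sketches loosely inspired by the vendor IRs mentioned above, not Cysic's actual format.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Op(Enum):
    # Hardware-friendly primitives that GPU/ASIC backends accelerate well.
    MSM = auto()       # multi-scalar multiplication
    NTT = auto()       # number-theoretic transform
    KECCAK = auto()    # hash rounds
    FIELD_ADD = auto()
    FIELD_MUL = auto()

@dataclass(frozen=True)
class IrInstr:
    op: Op
    operands: tuple  # indices into a flat value table

def lower_trace(trace: list) -> list:
    """Lower a (toy) execution trace into IR instructions.

    Real lowering maps VM semantics onto the op set; here we just
    translate symbolic step names for illustration, dropping steps
    the op set does not cover.
    """
    table = {"msm": Op.MSM, "ntt": Op.NTT, "keccak": Op.KECCAK,
             "add": Op.FIELD_ADD, "mul": Op.FIELD_MUL}
    return [IrInstr(table[step], ()) for step in trace if step in table]
```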
3) Prover Plane
- Multiple backends: SNARKs (Groth16, PLONK-style), STARKs, folding-based recursion (the Nova/SuperNova family), zkVMs (SP1, RISC Zero), and fraud-proof VMs (MIPS, RISC-V). Put each behind a gRPC/IPC boundary with capability discovery: supported systems, maximum circuit sizes, costs, latencies, and fault domains.
- Track security notes: soundness fixes have landed for Nova-style folding, including the cycle-of-curves variants. Monitor and pin the patched versions that carry formal proofs (eprint.iacr.org).
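Capability discovery plus routing might look like the following sketch: each backend reports what it can do, and the broker picks the cheapest eligible one. All field names (`max_circuit_size`, `est_cost_usd`, and so on) are assumptions for illustration, not any network's real API.

```python
from dataclasses import dataclass
from typing import List, Optional, Set

@dataclass
class Capabilities:
    # What each backend reports over its gRPC/IPC boundary (field names hypothetical).
    name: str
    proof_systems: Set[str]  # e.g. {"groth16", "stark", "zkvm"}
    max_circuit_size: int    # largest circuit (constraints/cycles) it accepts
    est_cost_usd: float      # expected cost per proof
    est_latency_s: float     # expected p50 latency in seconds

def pick_backend(backends: List[Capabilities], system: str,
                 circuit_size: int, max_latency_s: float) -> Optional[Capabilities]:
    """Route to the cheapest backend that supports the proof system,
    fits the circuit, and meets the latency target."""
    eligible = [b for b in backends
                if system in b.proof_systems
                and b.max_circuit_size >= circuit_size
                and b.est_latency_s <= max_latency_s]
    return min(eligible, key=lambda b: b.est_cost_usd, default=None)
```

Returning `None` when nothing qualifies is deliberate: the broker can then escalate (raise the fee cap, relax latency, or queue for burst capacity) instead of silently picking a bad fit.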
4) Recursion/Aggregation Plane
- Use fast recursive systems (Plonky3, Expander, GoldiBear wrappers) to fold many subproofs into one. Telos wrapped Polygon Hermez proofs with its Plonky2/GoldiBear wrapper in roughly 9 seconds on a consumer CPU; the goal is to aggregate multi-minute batch proofs in seconds (telos.net).
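The aggregation step can be modeled as a binary fold: each recursion layer halves the number of proofs, so N subproofs collapse in about log2(N) wrapper rounds. In this sketch `wrap` is a hypothetical stand-in for a real recursive verifier circuit that checks two subproofs and emits one.

```python
def wrap(left: str, right: str) -> str:
    # Stand-in for a recursion step that verifies two subproofs
    # inside one wrapper circuit and emits a single proof.
    return f"agg({left},{right})"

def aggregate(proofs: list) -> str:
    """Fold N subproofs into one via a binary recursion tree.

    Each layer halves the proof count, which is what keeps
    multi-minute batches aggregable in seconds.
    """
    layer = list(proofs)
    while len(layer) > 1:
        nxt = []
        for i in range(0, len(layer) - 1, 2):
            nxt.append(wrap(layer[i], layer[i + 1]))
        if len(layer) % 2:  # odd proof carries up to the next layer unchanged
            nxt.append(layer[-1])
        layer = nxt
    return layer[0]
```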
5) Verification/Settlement Plane
Offload on-chain verification to a decentralized verification layer to cut L1 gas. Aligned Layer reports that for many proof types, verification costs drop below 10% of native Ethereum verification (blog.alignedlayer.com).
Hardware strategy that actually matches ZK workloads
- Heterogeneous accelerators: GPUs excel at parallel NTT, MSM, and Keccak; FPGAs and ASICs are efficient at fixed arithmetic; CPUs handle control flow and witness transforms. Ingonyama's ICICLE ships GPU-optimized Poseidon, MSM, and mixed-radix NTT, and v3 adds a CPU backend for device-agnostic scheduling (ingonyama.com).
- New ASICs: Cysic's ZK-C1, ZK-Air, and ZK-Pro target MSM/NTT, with claimed 10-100x efficiency gains over GPUs on specific kernels. They are best-in-class for those kernels but limited in programmability, which is exactly why the IR abstraction matters (docs.cysic.xyz).
- Real-world clusters: SP1's tests on 200x4090 GPUs suggest "real-time" clusters are feasible on commodity hardware. Use NVLink/PCIe topology-aware node groups to reduce data movement for MSM/NTT (theblock.co).
Practical tip: Profile PCIe Copies
Host↔device transfers deserve scrutiny: some reports attribute 40-50% of end-to-end time to PCIe copies, especially when witness generation runs on the CPU while MSM/NTT runs on accelerators. Move partial witness generation closer to the accelerator, or better, shift more of the pipeline onto the device itself.
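A back-of-the-envelope check, assuming you already collect per-stage timings: compute the fraction of wall time spent in copies, and flag pipelines that land in the 40-50% regime described above for co-location. The stage names here are hypothetical examples of what a per-stage profiler would report.

```python
def transfer_fraction(stage_times: dict) -> float:
    """Fraction of end-to-end wall time spent in host<->device copies.

    stage_times maps stage name -> seconds; the key names are
    illustrative, not a real profiler schema.
    """
    total = sum(stage_times.values())
    copies = stage_times.get("h2d_copy", 0.0) + stage_times.get("d2h_copy", 0.0)
    return copies / total if total else 0.0

def should_colocate_witness(stage_times: dict, threshold: float = 0.4) -> bool:
    # If copies dominate (the 40-50% regime), moving witness
    # transforms onto or next to the device is the first fix to try.
    return transfer_fraction(stage_times) >= threshold
```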
Multi‑proof optionality is the new north star
The OP Stack's "multi-proof nirvana" runs several proof systems in parallel: Cannon, Asterisc (RISC-V), and Kona in Rust, with ZK to follow. Your prover plane should route requests across backends by policy: cheapest-first for routine batches, diversity-first for safety-critical checkpoints (optimism.io).
- Open implementations worth watching:
- Asterisc: a RISC-V fraud-proof VM that emulates a single VM step on-chain in Solidity/Yul (github.com).
- Kona: a Rust implementation of the OP state transition function, also used by OP-Succinct and Kailua for ZK and ZK-fraud proofs (github.com).
- OP-Succinct: full validity proofs for OP chains, with reported proving costs in the $0.5-1 range per batch (effectively negligible per transaction), though proving latency can stretch into minutes on clusters (succinct.xyz).
- Kailua (Boundless/RISC Zero): ZK fraud-proofs today, targeting roughly one-hour finality in validity mode; built to run Kona inside the RISC Zero zkVM (github.com).
Design Pattern: Dual Provenance for Safety Windows
During safety-critical windows, pair a Groth16 or PLONK-style proof with an independent zkVM proof of the same state transition function (STF): a "dual provenance" pattern.
Key Components:
- Proofs: use Groth16 or Plonkish proofs for efficiency, plus an independent zkVM proof for robustness.
- Key moments: apply the pattern during the windows where safety matters most, such as circuit upgrades and large withdrawals.
Policy for Withdrawals and Bridge Operations:
Gate L2 withdrawals and bridge operations with an explicit AND/OR policy, balancing user experience against safety:
- AND policy: both proofs must validate the operation; maximum security.
- OR policy: proceed when at least one proof checks out; better liveness and UX.
Combining the two yields a framework that stays safe by default in critical windows while remaining responsive the rest of the time.
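The AND/OR gate above reduces to a few lines; in this sketch `in_safety_window` is a hypothetical flag that your checkpoint policy would set.

```python
def withdrawal_allowed(circuit_proof_ok: bool, zkvm_proof_ok: bool,
                       in_safety_window: bool) -> bool:
    """Dual-provenance gate: require both proof families inside
    safety-critical windows, either one outside them."""
    if in_safety_window:
        return circuit_proof_ok and zkvm_proof_ok  # AND policy: defense in depth
    return circuit_proof_ok or zkvm_proof_ok       # OR policy: better UX/liveness
```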
Outsource and diversify: proof networks and verification layers
- Decentralized prover networks: Succinct's Prover Network is live, starting with SP1, with real-time proving targets and a staking marketplace; operators like Cysic contribute multi-node GPU and ASIC capacity. Use it to supplement existing infrastructure or as a primary setup (dune.com).
- ZkCloud (formerly Gevulot): an integrated proving platform with containerized provers, CPU and GPU fleets, and a Cosmos-based orchestration chain. It claims "up to 95%" cost savings versus traditional cloud for ZK workloads, with low operator friction, and supports multiple proof systems including SP1 and R0VM (zkcloud.com).
- Managed services: RISC Zero's Bonsai offers parallel proving with 99.9% uptime SLAs; worth wiring in as a fallback in your broker layer (risc0.com).
- Verification layers: Aligned Layer, an EigenLayer AVS, verifies proofs cheaply and posts results to Ethereum or L2s. Benchmark your circuits' verification savings against L1 gas (blog.alignedlayer.com).
- ZK state committees and coprocessing: Lagrange's AVS runs ZK light clients and generates ZK state proofs for optimistic rollups, with no cap on the number of attesters. Useful for cross-rollup finality and on-chain data queries (lagrange.dev).
Procurement Tip:
Request pre-proof quotes and explicit SLOs: p50 and p95 latency, failure rates, and requeue delays.
Run competitive bidding at the broker level: cap fee and latency per batch, then award each job to the best eligible bid across providers.
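Broker-level bidding can be sketched as: filter bids by the fee and latency caps, then award to the cheapest survivor (tie-broken by latency). The `Bid` fields are illustrative, not any network's actual quote format.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Bid:
    provider: str
    fee_usd: float      # quoted fee for this batch
    p95_latency_s: float  # provider's quoted p95 latency

def award(bids: List[Bid], max_fee_usd: float, max_p95_s: float) -> Optional[Bid]:
    """Drop bids over the fee or latency cap, then take the cheapest
    survivor; ties go to the lower-latency provider."""
    ok = [b for b in bids
          if b.fee_usd <= max_fee_usd and b.p95_latency_s <= max_p95_s]
    return min(ok, key=lambda b: (b.fee_usd, b.p95_latency_s), default=None)
```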
Interop and aggregation: design for cross‑stack futures
- Polygon AggLayer: pessimistic proofs are live as of v0.2, and the new execution-proof mode lets non-CDK chains join and verify their own state, a step toward sub-10-second cross-chain UX. If your chain needs cross-chain liquidity, plan for AggLayer-compatible proofs and bridge semantics (agglayer.dev).
- Plonky3's role: it is becoming the default recursion layer for projects like SP1 and Valida and will reinforce AggLayer's safety stack; a reasonable bet for proof aggregation and future portability (theblock.co).
Concrete throughput patterns we see working
1) Partitioned, pipelined recursion. Split each batch into 32-128 subcircuits, dispatch them to GPU or ASIC workers, then fold the results back into a single proof. Keep witness generation and proving in a 1:1 pipeline so witness throughput keeps pace with the accelerators. In our experiments, Yoimiya-style decoupling of the two phases noticeably improved resource utilization and cut proving time (arxiv.org).
2) Dual-path backends
For bytecode-equivalent zkEVM circuits, run your existing SNARK/STARK path alongside a zkVM proof of the STF (e.g., SP1 plus Kona). Treat the zkVM path as a safety belt during circuit upgrades and high-risk releases.
3) Brokered Burst Capacity
If p95 latency breaches your SLO (say, 120 seconds), autoscale to Succinct or ZkCloud with a per-proof fee ceiling. Keep 20-30% local headroom for retries and send the tail to external capacity (dune.com).
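The trigger logic might look like this sketch: a nearest-rank p95 over recent proof latencies, plus a local-headroom check. The 120-second SLO and 25% headroom defaults mirror the numbers above; the function names are hypothetical.

```python
import math

def p95(samples: list) -> float:
    """Nearest-rank 95th percentile over recent proof latencies."""
    ordered = sorted(samples)
    rank = max(0, math.ceil(0.95 * len(ordered)) - 1)
    return ordered[rank]

def should_burst(latencies_s: list, slo_p95_s: float = 120.0,
                 local_utilization: float = 0.0, headroom: float = 0.25) -> bool:
    # Offload the tail when p95 breaches the SLO, or when local capacity
    # no longer leaves the ~20-30% headroom reserved for retries.
    return p95(latencies_s) > slo_p95_s or local_utilization > 1.0 - headroom
```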
4) Off-L1 Verification and Co-Posting
Verify proofs on Aligned Layer to save costs, then post the verification result plus a summary to L1. Schedule periodic "full L1 verify" checkpoints, every N blocks or weekly, to anchor trust.
5) Hardware-aware scheduling
Co-locate witness transforms with the accelerators to cut PCIe transfers. Size MSM/NTT batches to fit GPU memory and avoid thrashing. When every millisecond matters, many mid-tier GPUs (e.g., 4090s) often beat a few datacenter cards on latency; SP1's real-time results were achieved this way (theblock.co).
Case studies you can copy from (and what to extract)
- Starknet + Stwo: a CPU-efficient prover exceeding 500,000 hashes per second; a blueprint for CPU-first fallback paths when GPU capacity is scarce (starknet.io).
- Polygon Plonky3 + GoldiBear wrapper: fast recursion wrappers that aggregate hundreds or thousands of proofs in seconds, cutting L1 costs and latency without redesigning circuits.
- OP Stack multi-proof path: contracts and infrastructure configured so fraud-proofs, zk-fraud-proofs, and validity proofs can be swapped in and out without disrupting the rest of the system; a solid template for staged decentralization and proof diversity (optimism.io).
- OP-Succinct / Kailua / Kona: blueprints that shrink the OP-chain fraud window from a week to minutes or hours using zkVMs, with minimal changes to the rollup itself (succinct.xyz).
- Succinct Prover Network + Cysic: a decentralized prover marketplace; Cysic's entry adds GPU and ASIC capacity and auction-backed latency guarantees (dune.com).
- Lagrange State Committees: for optimistic rollups, ZK light clients enable cross-chain access to state, backed by a large, secure attester set (lagrange.dev).
- AggLayer v0.2/0.3: cross-chain interoperability regardless of a chain's origin stack; the pessimistic proof model keeps a shared bridge safe even if one chain misbehaves (agglayer.dev).
SRE playbook for provers (what your ops team should enforce)
- SLOs and Budgets
- Target p50 under 60 seconds and p95 under 180 seconds per batch proof, adjusted for block size.
- Error budget: under 0.5% job failures and under 0.1% verification reverts.
- Watch the "tail taxes": L1 congestion spikes and verification-layer delays can erode budgets quickly.
- Determinism and Auditability
- Pin all math libraries (field operations, FFT) for reproducible builds. Determinism is essential in proving markets with slashing, such as Succinct (ainvest.com).
- Hot-swap upgrades
- Rotate keys and parameters without downtime via blue/green deployment for circuit updates, and run new circuits alongside the old for at least one full cycle before promoting them to default.
- Checkpoint policy
- Run a full on-chain verification weekly, verification-layer checks daily, and per-block summaries. Tune the cadence to your risk tolerance and gas budget (blog.alignedlayer.com).
- Telemetry
- Track metrics at every stage: witness qps, MSM/NTT occupancy, PCIe I/O rates, GPU memory headroom, recursion depth, and verification gas. Include per-backend stats such as effective $/proof and Joules/proof.
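Two of those per-backend stats are simple ratios, shown here as a sketch. The key design choice: both divide by successful proofs only, so failed and requeued jobs correctly inflate the effective figures.

```python
def effective_cost_per_proof(total_usd: float, proofs_ok: int,
                             proofs_failed: int) -> float:
    """Effective $/proof over a telemetry window: failed and requeued
    jobs still burn money, so divide spend by successful proofs only."""
    if proofs_ok == 0:
        raise ValueError("no successful proofs in window")
    return total_usd / proofs_ok

def joules_per_proof(avg_watts: float, window_s: float, proofs_ok: int) -> float:
    # Energy per successful proof: average power times window length,
    # divided by successful proof count.
    return (avg_watts * window_s) / proofs_ok
```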
Security considerations you can’t ignore
- Folding systems and recursion: track soundness updates for Nova-style proofs. Vendors have shipped patches and theoretical bounds are still being debated, so stay current and pin the commits you rely on (eprint.iacr.org).
- Permissionless validation risks (optimistic): even with BoLD on Arbitrum or Stage-1 on OP, add safeguards such as rate limits, bond sizing, and monitoring to deter griefing. Arbitrum's BoLD targets roughly 12-13 days to resolution, with caveats for permissionless setups (theblock.co).
- Data retention: third-party prover APIs (e.g., ZKsync's) may drop batch data after roughly 30 days, per early reports. Ensure your broker delivers inputs on time, or keep your own archives (zksync.mirror.xyz).
A 90‑day implementation plan
Weeks 0-2: Architecture and Procurement
- Stand up a minimal broker with two backends: your existing prover and SP1 via Succinct. Define a ProverJob proto with fields such as circuit_id, target_latency, max_fee, and recursion_strategy.
- Choose an intermediate representation (Hypercube-style ops, or something inspired by them) and wire your witness transforms to it (hozk.io).
Weeks 2-6: Pipeline and Recursion
- Partition circuits and add recursion with Plonky3 or Expander. Track p50/p95; target 32-128 subcircuits and wrapper proofs in about a second. Co-locate witness transforms on the GPU nodes to cut PCIe overhead, and validate determinism as you go.
Weeks 6-9: Verification and Interop
- Integrate Aligned Layer to offload verification, and schedule weekly on-chain verification checkpoints (blog.alignedlayer.com). Align proof artifacts with AggLayer requirements to enable cross-chain UX (agglayer.dev).
Weeks 9-12: Resilience and Growth
- Add a third backend such as ZkCloud for burst capacity. Set the routing policy: price-first for routine work, diversity-first at checkpoints (zkcloud.com).
- On the OP Stack, pilot a "lite ZK" path (OP-Succinct Lite or Kailua) to shorten dispute windows, then plan the transition to full validity mode (blog.succinct.xyz).
What “good” looks like by Q1 next year
- Latency: median under 60 seconds and p95 under 180 seconds per batch.
- Cost: typical batches at well under a dollar per transaction, with verification offloaded whenever it is safe to do so (succinct.xyz).
- Diversity: We’re excited to have two distinct proof families in action right now--imagine a mix of circuit and zkVM. Plus, we’re rolling out a full on-chain verification every week! If you're looking for more info, just check out this link: optimism.io. You’ll find all the details you need there!
- Capacity: We can quickly boost our capacity to 2-5 times what we usually handle thanks to our prover networks, and we can do it in just a few minutes! Curious to learn more? Just swing by dune.com for all the details!
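These targets are easiest to keep honest as an automated SLO check over each reporting window. A minimal sketch, assuming a hypothetical `metrics` dict your monitoring already produces; the field names are illustrative.

```python
# Q1 targets from the list above, encoded as thresholds.
TARGETS = {
    "p50_latency_s": 60,
    "p95_latency_s": 180,
    "cost_per_tx_usd": 0.01,
    "proof_families": 2,
}

def check_slos(metrics: dict) -> list:
    """Return the list of SLO violations for one reporting window."""
    violations = []
    if metrics["p50_latency_s"] > TARGETS["p50_latency_s"]:
        violations.append("median latency over 60s")
    if metrics["p95_latency_s"] > TARGETS["p95_latency_s"]:
        violations.append("p95 latency over 180s")
    if metrics["cost_per_tx_usd"] > TARGETS["cost_per_tx_usd"]:
        violations.append("cost per tx over $0.01")
    if metrics["proof_families"] < TARGETS["proof_families"]:
        violations.append("fewer than two proof families in production")
    return violations
```

An empty list means the window met all four targets; anything else feeds the weekly review.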
Final thought
Future-proofing your throughput is not about grabbing the "fastest" prover today. It is about building a market-aware, hardware-aware, multi-proof pipeline whose pieces fit together cleanly. Teams that modularize now will scale capacity later with a configuration change instead of a rollup rewrite.
If you want an architecture review or a proof-throughput load test, 7Block Labs can help you stand up the broker layer, tune recursion, and add redundancy with Succinct, ZkCloud, or Aligned, typically in under six weeks.
Sources and further reading
- Succinct's SP1 Hypercube real-time proving and Prover Network milestones; Cysic joining as a multi-node prover (theblock.co)
- StarkWare's Stwo, Polygon's Plonky3, and Polyhedra's Expander breaking hash-throughput records (starknet.io)
- OP Stack Stage-1 fault proofs and the multi-proof path, led by Asterisc and Kona (optimism.io)
- OP-Succinct and Kailua on ZK fraud proofs and validity mode (succinct.xyz)
- Aligned Layer verification as an EigenLayer AVS (blog.alignedlayer.com)
- Lagrange, the first ZK actively validated service (AVS) on EigenLayer mainnet (lagrange.dev)
- AggLayer pessimistic proofs and the v0.3 execution-proof mode (agglayer.dev)
- Yoimiya on pipeline partitioning for ZK systems; CrowdProve on community proving (arxiv.org)
- ICICLE cross-platform GPU/CPU proving libraries (ingonyama.com)
- ZkCloud (formerly Gevulot) launching its proving network and Firestarter platform (zkcloud.com)