7Block Labs
Blockchain Development

By AUJay

Best Practices for Future-Proofing Rollup Proof Throughput


Why this matters now

2025 brought major shifts in rollup proof throughput and the costs attached to it.

On May 7, 2025, Ethereum shipped the Pectra mainnet upgrade. EIP‑7691 doubled the average blob target and raised the per-block maximum to 9 blobs, while EIP‑7623 repriced calldata. Together these changes push data availability decisively toward blobs.

Pectra also shipped the EIP‑2537 BLS12‑381 precompiles, cutting the on-chain cost of verifying modern SNARKs and BLS signatures and giving teams a higher-security alternative to BN254 at competitive gas prices.

Optimistic rollups moved too: Arbitrum's BoLD reached mainnet, enabling permissionless validation, and OP Mainnet's fault proofs reached Stage-1 decentralization. The result is real throughput gains without weakening safety (The Block).

ZK rollups and zkVMs have accelerated as well, both in prover speed and in decentralization options: Boojum proves on consumer GPUs, Plonky3 set CPU throughput records, and StarkWare published new Stwo recursion benchmarks.

What follows is 7Block Labs' numbers-first guide to raising proof throughput on both ZK and optimistic stacks.


The 2025 baseline: what changed and the raw numbers

A blob holds 4096 field elements of 32 bytes each, i.e. 131,072 bytes (128 KiB). Under EIP‑7691, Ethereum targets 6 blobs per block with a maximum of 9. At roughly 12-second slots, that works out to about 64 KiB/s of average data availability bandwidth: 6 × 128 KiB spread over 12 seconds (eips.ethereum.org).
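
For quick planning, the arithmetic above is easy to script. The constants are the EIP‑4844/EIP‑7691 figures just cited; nothing else is assumed.

```python
# DA bandwidth under EIP-7691 target blob counts (figures cited above).
FIELD_ELEMENTS_PER_BLOB = 4096
BYTES_PER_FIELD_ELEMENT = 32
BLOB_BYTES = FIELD_ELEMENTS_PER_BLOB * BYTES_PER_FIELD_ELEMENT  # 131,072 B = 128 KiB

TARGET_BLOBS_PER_BLOCK = 6
MAX_BLOBS_PER_BLOCK = 9
SLOT_SECONDS = 12

# Average DA bandwidth at the target: 6 * 128 KiB over a 12 s slot.
avg_da_bandwidth_kib_s = TARGET_BLOBS_PER_BLOCK * BLOB_BYTES / 1024 / SLOT_SECONDS
print(avg_da_bandwidth_kib_s)  # 64.0 KiB/s
```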

  • Calldata: EIP‑7623 sets a floor price of 10/40 gas per byte, bounding worst-case execution-layer payload sizes and nudging developers toward blobs for data availability. Default to a blob-first DA strategy except for small control paths (eips.ethereum.org).
  • BLS12-381 precompiles (EIP-2537): these add curve operations, multi-scalar multiplication (MSM), and pairings. A pairing check costs 32,600·k + 37,700 gas, versus 34,000·k + 45,000 for BN254 (EIP‑1108), while the security level rises from roughly 80 bits to over 120. You can target BLS12-381 without paying a gas penalty (eips.ethereum.org).

Arbitrum's BoLD dispute protocol is live on One/Nova, and OP Mainnet runs governance-approved, permissionless fault proofs. On these platforms, fault- and fraud-proof throughput is no longer theoretical.


Define “proof throughput” precisely (so you can scale it)

Track each of the four layers separately:

1) Proving Layer

Measure proofs per second (PPS) per circuit: execution trace, state root, and the aggregator/recursion stages. Track p50/p90/p99 proof latency per job class, and monitor the utilization and time share of bottleneck kernels such as FFT/NTT, MSM, and Merkle hashing.

2) Aggregation/Recursion Layer

Track aggregation depth per window (e.g. leaf proofs per aggregator and aggregators per recursive wrap) and time-to-final-aggregated-proof (TFA).

3) Data Availability (DA) Layer

Track blob fill rate (the fraction of blob space actually used), blobs per batch, and blob fees against targets. Also track the calldata fallback count and bytes; after EIP-7623, these should trend to zero.

4) L1 Verification Layer

Track gas per verify (base plus pairing costs), bytes posted (blob data and on-chain metadata), and failure/retry rates.

Then tie Service Level Objectives (SLOs) to these metrics, e.g. "99% of batches final within 6 minutes," "99% of aggregated proofs verified on L1 within 1 slot of availability," and "average blob fill at or above 85%."
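
A minimal sketch of wiring those SLOs to batch metrics. The field names and the nearest-rank p99 approximation are illustrative choices, not a monitoring standard.

```python
# Illustrative SLO checks over per-batch metrics; field names are assumptions.
from dataclasses import dataclass

@dataclass
class BatchMetrics:
    finality_seconds: float  # time from batch close to final aggregated proof
    l1_verify_slots: int     # slots from proof availability to L1 verification
    blob_fill: float         # fraction of blob space used, 0..1

def meets_slos(batches: list[BatchMetrics]) -> dict[str, bool]:
    n = len(batches)
    # Crude nearest-rank p99; rounds up to the max element for small samples.
    p99_idx = min(n - 1, int(0.99 * n))
    by_finality = sorted(b.finality_seconds for b in batches)
    by_verify = sorted(b.l1_verify_slots for b in batches)
    avg_fill = sum(b.blob_fill for b in batches) / n
    return {
        "p99_finality_under_6min": by_finality[p99_idx] <= 360,
        "p99_verified_within_1_slot": by_verify[p99_idx] <= 1,
        "avg_blob_fill_85pct": avg_fill >= 0.85,
    }
```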


Pattern 1: DA‑aware batching after EIP‑7691

What to do now:

  • Batch into blobs, not calldata. Size batches to 1-3 blobs so you exceed 85% fill while keeping proof size and verification costs in check. If a batch would spill into a mostly-empty extra blob, hold the overflow for the next slot, or increase aggregation depth to amortize L1 costs (eips.ethereum.org).

Compute your DA headroom against the new target: at an average of 6 blobs per block of roughly 128 KiB each, you have about 768 KiB every 12 seconds. If your chain's raw data consistently exceeds that, evaluate multi-DA options such as EigenDA or Celestia while still settling on Ethereum (eips.ethereum.org).

Where calldata is unavoidable (small control proofs, keeper heartbeats), budget around EIP-7623's floor pricing and keep payloads to a few kilobytes (eips.ethereum.org).
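
The batching rule above might look like this in code. The 85% fill cutoff and 3-blob cap are tunable assumptions, not protocol constants.

```python
# Blob-first batching sketch: post full blobs up to a cap, and defer a
# trailing blob to the next slot unless it is reasonably full.
BLOB_BYTES = 131_072  # 128 KiB per blob (EIP-4844)

def plan_batch(pending_bytes: int, min_fill: float = 0.85, max_blobs: int = 3):
    """Return (blobs_to_post, bytes_to_post, bytes_deferred)."""
    full, rem = divmod(pending_bytes, BLOB_BYTES)
    blobs = min(full, max_blobs)
    posted = blobs * BLOB_BYTES
    # Include the partial trailing blob only if it meets the fill threshold.
    if blobs < max_blobs and rem >= min_fill * BLOB_BYTES:
        blobs += 1
        posted += rem
    return blobs, posted, pending_bytes - posted
```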

Signals from alt‑DA:

Celestia's blob usage has surged, with large mints producing sharp spikes. That demonstrates the flexibility of off-Ethereum data availability, particularly for L3s and app-specific rollups.

  • EigenDA runs a "free tier" with strong synthetic and peak throughput, dual-quorum security, and V2 in production; L2BEAT tracks its milestones. Watch its burst capacity while proofs finalize on Ethereum.
  • Rule of thumb: with Ethereum blobs at the 6-blob target you get roughly 768 KiB every 12 seconds, or about 3.84 MB of compressed data per minute. Staying blob-only is fine for now; if blob usage climbs above 75% at peak, start evaluating alt-DA.
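
One way to encode that 75%-at-peak heuristic. The window length and the strictness of "every slot" are assumptions you would tune to your traffic.

```python
# Alert when sustained blob utilization suggests evaluating alt-DA.
TARGET_BYTES_PER_SLOT = 6 * 131_072  # ~768 KiB at the EIP-7691 6-blob target

def should_evaluate_alt_da(bytes_per_slot: list[int], threshold: float = 0.75) -> bool:
    """True if every slot in the observation window exceeds the threshold."""
    if not bytes_per_slot:
        return False
    return all(b > threshold * TARGET_BYTES_PER_SLOT for b in bytes_per_slot)
```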

Pattern 2: Move verification to BLS12‑381 now

Why:

EIP‑2537 makes BLS12‑381 both practical and cheaper than many assume. Pairing checks cost about 32,600·k + 37,700 gas, versus roughly 34,000·k + 45,000 for BN254. For a 3-pairing verifier (e.g. Groth16), that is about 135,500 gas on BLS12‑381 versus about 147,000 on BN254, with stronger security margins on top (eips.ethereum.org).

How:

If your prover targets BN254 Groth16 or Plonk, add a recursive wrapper that converts the proof into a BLS12-381 proof for L1 verification. If you use Plonky3 or KZG-style setups, compile verifying keys for BLS12-381 from here on out (polygon.technology).

  • Keep the old verifiers as a fallback until the migration is proven. Run a canary path (a few batches per week verified on the new path) before cutting over.

Bonus: to reference blob commitments on-chain (e.g. proof-carrying data commitments), combine this with EIP‑4844's KZG point-evaluation precompile and the BLOBHASH opcode for a first-class on-chain flow (eips.ethereum.org).


Pattern 3: Recursion and aggregation that actually scale

What the new benchmarks tell us:

Polygon's Plonky3 sustains millions of Poseidon hashes per second even on laptops, with stronger server-side numbers, which makes deep recursion viable for many applications.

  • StarkWare's Stwo, built on Circle-STARK, reaches roughly 500-600k hashes per second on ordinary CPUs, with production-grade recursion coming soon. That helps parallel proving and keeps aggregation windows short.

Practices We Recommend:

Use 2-3 layers: leaf proofs (execution traces), mid-level aggregators per time window, and a final wrap proof per L1 submission window. Tune window sizes against your latency SLO.
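
A rough sizing helper for that 2-3 layer plan. The fan-in of 16 is a made-up tuning knob for illustration, not a benchmark figure.

```python
# Size a fixed-fan-in recursion tree for one submission window.
import math

def aggregation_plan(leaf_proofs: int, fan_in: int = 16):
    """Return (levels, proofs_per_level) above the leaves."""
    levels = []
    n = leaf_proofs
    while n > 1:
        n = math.ceil(n / fan_in)
        levels.append(n)
    return len(levels), levels

# e.g. 200 leaf proofs with fan-in 16 -> 13 mid-level aggregators, then 1 wrap
```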

Keep the wrap verifier light on BLS12-381: push heavy checks (large MSMs) into the recursion levels rather than onto the on-chain verifier.

If you target Nova/HyperNova-style IVC, verify that your curve cycles and commitment-scheme choices line up with your final L1 target (BLS12‑381 or BN254) and its trusted-setup requirements (see the project's GitHub).


Pattern 4: Multi‑proof, multi‑prover architectures

Why:

Heterogeneous proofs reduce correlated-failure risk and let you mix faster, cheaper provers with more conservative ones. Taiko already runs a multi-proof architecture combining Succinct, RISC Zero, and SGX, with more ZK proof systems planned.

How to Deploy:

Define a policy such as "N of M proofs must verify, and at least one must be ZK." Start by requiring ZK for a small percentage of blocks and ramp up over time, as Taiko did.

To scale, consider decentralized proving marketplaces: Succinct's Prover Network is live on mainnet with an auction-driven market spanning more than 1,700 programs, and RISC Zero Bonsai offers managed parallel proving with 99.9% uptime.
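
The "N of M, at least one ZK" policy can be sketched as follows. The prover names and the substring-based ZK check are purely illustrative.

```python
# Illustrative N-of-M acceptance policy with a ZK requirement.
def batch_accepted(verified: dict[str, bool], n: int = 2, require_zk: bool = True) -> bool:
    """verified maps prover name -> result; names containing 'zk' count as ZK."""
    passing = [name for name, ok in verified.items() if ok]
    if len(passing) < n:
        return False
    return (not require_zk) or any("zk" in name for name in passing)
```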

Procurement Reality

If your p95 proving queues back up for more than 30 minutes at peak, burst to a prover network rather than buying GPUs that sit idle most of the time.


Pattern 5: Hardware you can actually buy and run

What’s deployable today:

  • zkSync's Boojum runs on consumer GPUs. The docs suggest at least 6 GB of VRAM for low-TPS targets, offer a CPU-only test mode requiring 128 GB of RAM, and recommend budgeting around 16 GB of GPU VRAM for production, so spec your fleet accordingly.

GPU acceleration libraries such as Ingonyama's ICICLE and Boojum-CUDA lift FFT/MSM performance and shorten latency tails.

  • ASIC and FPGA acceleration is maturing: ZK-specific ASICs show 10-100x speedups on certain kernels, and FPGAs remain competitive for MSM and NTT. Some are beginning to integrate with decentralized prover networks. Weigh these options when your workload is steady and predictable.

Capacity Planning Tip:

Start with one A-class consumer GPU per 15-30 TPS of zkEVM workload, then monitor. If p95 job wait exceeds 2 minutes, add one GPU per additional 10 TPS until p95 sits below 60 seconds.
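
That capacity rule, expressed as a sketch. The 30-TPS-per-GPU floor and the wait-time thresholds come straight from the rule of thumb above, not from measurement.

```python
# Capacity-planning sketch: one GPU per ~30 TPS as a floor, stepping up one
# GPU at a time while p95 job wait is above the 2-minute trigger.
def gpus_needed(tps: float, p95_wait_s: float, current_gpus: int) -> int:
    base = max(1, -(-int(tps) // 30))  # ceil(tps / 30) as a starting floor
    if p95_wait_s > 120:
        # Add capacity in one-GPU steps until p95 settles below 60 s.
        return max(current_gpus + 1, base)
    return max(current_gpus, base)
```

Run it after each monitoring interval; repeated applications converge once p95 drops back under the target band.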


Optimistic rollups: raise the ceiling without breaking safety

Arbitrum BoLD is live on One/Nova with permissionless validation and a bounded dispute time, a real step toward Stage-2. For Orbit chains, follow Offchain Labs' guidance: start with permissioned validation and open it up gradually (docs.arbitrum.io).

OP Stack's Cannon fault proofs are live, with a roadmap for 64-bit support and multithreading that will safely raise block gas limits. If you plan to raise limits, time your upgrades to land with the Cannon enhancements.

Practical Governance Note:

L2BEAT now requires a minimum 7-day challenge period for Stage-1 optimistic rollups, including "grace" periods. Align your withdrawal and challenge configurations accordingly or risk a downgrade.


L1 gas math you can budget today

  • BN254 pairing (EIP‑1108): about 34,000·k + 45,000 gas. BLS12-381 pairing (EIP-2537): about 32,600·k + 37,700 gas. KZG point-evaluation precompile (EIP-4844): a flat 50,000 gas per call.

For a 3-pairing verifier plus bookkeeping, expect the pairing portion to cost about 135,500 gas on BLS12-381 versus roughly 147,000 on BN254. That margin makes the switch to BLS12‑381 a security upgrade with no gas tax (eips.ethereum.org).
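
These pricing formulas are easy to encode for budgeting; k is the number of pairings in the check.

```python
# Pairing-check gas estimates from the precompile pricing cited above.
def bls12_381_pairing_gas(k: int) -> int:
    return 32_600 * k + 37_700  # EIP-2537

def bn254_pairing_gas(k: int) -> int:
    return 34_000 * k + 45_000  # EIP-1108

k = 3  # e.g. a Groth16-style 3-pairing verifier
print(bls12_381_pairing_gas(k))  # 135500
print(bn254_pairing_gas(k))      # 147000
```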


Two concrete playbooks

A) ZK Rollup (zkEVM-style) Aiming for Sub-5-Minute Finality

  • Batching: Target 1-2 blobs per batch with a 30-60 second window. If blob fill stays below 70% for three consecutive windows, extend the window by 15 seconds (eips.ethereum.org).
  • Proving: Size the GPU pool to at least 2x peak batch concurrency. Run recursive aggregation every 2-4 batches and keep the wrap proof compact on BLS12-381 (eips.ethereum.org).
  • Verification: Launch the BLS12‑381 verifier and keep BN254 as a fallback for one to two weeks, dual-submitting about 5% of batches during that window (eips.ethereum.org).
  • Overflow: Route overflow to a decentralized prover marketplace with a peak-time price cap, and monitor p95 queue latency and per-circuit PPS (theblock.co).
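
The batching bullet's window-extension rule, as a sketch; the 30-60 s clamp mirrors the playbook's range, and both thresholds are tunable.

```python
# Adaptive batching window: extend by 15 s when blob fill stays under 70%
# for three consecutive windows, clamped to the playbook's 30-60 s range.
def next_window_seconds(window_s: int, recent_fills: list[float],
                        min_s: int = 30, max_s: int = 60) -> int:
    if len(recent_fills) >= 3 and all(f < 0.70 for f in recent_fills[-3:]):
        return min(max_s, window_s + 15)
    return max(min_s, window_s)
```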

B) OP Stack Chain Aiming for Better Throughput

  • Upgrade Path: Ship governance-approved fault proofs (Cannon) to reach Stage-1 first, then evaluate BoLD-style bounded disputes if and when the stack supports them (OP Labs Blog).
  • Gas Limit: Raise MAX_GAS_LIMIT only after the Cannon upgrades that add 64-bit support and multithreading, so proving stays tractable (Governance Forum).
  • Data Availability: With EIP-7623 live, move data to blobs and reserve calldata for critical control paths (EIP-7623).

What’s next (6-18 months): prepare now

Blob capacity will keep rising as PeerDAS research lands. Make sure your batcher handles changes to the target and maximum blob limits, and monitor blob pricing against DA alternatives in real time.

Expect more networks to adopt multi-proof requirements ("ZK-or-fallback") and more applications to verify on BLS12-381. Keep proving and verification modular so you can swap systems without rewriting the application.

Prover decentralization is accelerating: marketplaces like Succinct and services like Bonsai are becoming standard parts of the stack. Get your procurement and security teams familiar with them now.

ZK proving keeps getting faster (Plonky3, Stwo). Revisit your recursion depth every few months; you may be able to simplify the pipeline and save money.


Implementation checklist (that we hold teams accountable to)

  • Data Availability: Default to blobs. Hold blob fill at 85% or higher and alert if it drops below 70% for five consecutive windows; post-EIP-7623, keep calldata under 1% of monthly DA bytes (eips.ethereum.org).
  • Provers: Size the GPU pool so p95 job wait stays below 60 seconds, and stand up per-circuit PPS dashboards. Keep the final wrap verifier limited to BLS12-381 pairings, and wire burst capacity to a decentralized prover network with a spend cap and proof-integrity checks (eips.ethereum.org, theblock.co).
  • Verification: Migrate verifiers to EIP-2537, quantify the gas deltas against BN254, and track actual per-verify gas on mainnet (eips.ethereum.org).
  • Optimistic Safety (where applicable): Adopt BoLD/Cannon per vendor guidance, set challenge periods of at least 7 days, and keep an emergency dispute runbook ready (docs.arbitrum.io).
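
The checklist's two DA alerts can be sketched as below; how the metrics are collected is left to your monitoring stack.

```python
# Checklist DA alerts: sustained low blob fill, and calldata exceeding 1%
# of DA bytes for the period. Thresholds are the checklist's own numbers.
def da_alerts(window_fills: list[float], calldata_bytes: int, da_bytes: int) -> dict[str, bool]:
    low_fill = len(window_fills) >= 5 and all(f < 0.70 for f in window_fills[-5:])
    calldata_over = da_bytes > 0 and calldata_bytes / da_bytes > 0.01
    return {"blob_fill_low": low_fill, "calldata_over_budget": calldata_over}
```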

Appendix: quick reference facts for 2025 planning

  • EIP-7691 (Pectra): target 6 blobs per block, maximum 9.
  • Blob size: 4096 field elements × 32 bytes = 131,072 bytes (~128 KiB).
  • EIP‑7623 calldata floor: data-heavy transactions pay 10/40 gas per byte.
  • EIP-2537 BLS12-381 precompiles: addresses 0x0b to 0x11; pairing cost 32,600·k + 37,700 gas.
  • EIP-1108 BN254 pairing: 34,000·k + 45,000 gas.
  • KZG point-evaluation precompile (EIP‑4844): address 0x0a; 50,000 gas per call.
  • Arbitrum BoLD: live on mainnet with a permissionless validation path.
  • OP Mainnet fault proofs: live, a milestone for Stage-1 decentralization.
  • Boojum: runs on consumer GPUs; the docs list options from 6 to 16 GB of VRAM depending on throughput.
  • Prover marketplaces: Succinct Prover Network (mainnet) and RISC Zero Bonsai (managed).
  • Prover performance: Plonky3 and Stwo benchmarks show significant headroom for recursion through 2025.

For a 7Block Labs proof-throughput architecture review: we benchmark your circuits, tune your GPU fleet, stand up a blob-first batcher, and run the BLS12-381 migration with dual-submit canaries, typically in under four weeks.

Like what you're reading? Let's build together.

Get a free 30-minute consultation with our engineering team.

7BlockLabs

Full-stack blockchain product studio: DeFi, dApps, audits, integrations.

7Block Labs is a trading name of JAYANTH TECHNOLOGIES LIMITED.

Registered in England and Wales (Company No. 16589283).

Registered Office address: Office 13536, 182-184 High Street North, East Ham, London, E6 2JA.

© 2026 7BlockLabs. All rights reserved.