7Block Labs
Blockchain Technology

By AUJay

Running a Prover Network Client: Operational Notes for zkVM Teams in 2025

Who this is for

Leaders at startups and big companies looking into ZK infrastructure will want detailed, actionable notes to help them plan out pilots, create budgets, and establish SLAs for operating zkVM prover clients.


1) The 2025 proving landscape in one page

  • RISC Zero Boundless has made the leap from an incentivized testnet to a fully active proof marketplace. They're rolling out Proof of Verifiable Work (PoVW), using staking-based collateral, and handling on-chain settlement on Base/Ethereum. Provers get to stake ZKC, place bids on jobs, and there's even a penalty for missing deadlines; proofs are settled as tiny receipts on host chains. You can check out more about it here.
  • Succinct’s SP1 zkVM is now proving blocks on Ethereum L1 in real time using a 16× RTX 5090 cluster, nailing 99.7% of blocks in under 12 seconds. SP1 also shipped security hardening (“Turbo”), patched disclosed vulnerabilities, and deprecated old verifiers on mainnets. Read the full scoop here.
  • StarkWare just launched S-two, their next-gen STARK prover, on the Starknet mainnet. This replaces Stone and brings significantly lower latencies, paving the way for decentralized proving. For the deets, visit here.
  • We’re seeing some cool developments in AVS-based prover markets: Lagrange’s Prover Network is now running on EigenLayer with over 20 institutional operators. They're targeting ZK Stack provers (like ZKsync) and coprocessors. Speaking of which, ZKsync is also trying out decentralized proving through Fermah’s universal proof market and AVSs. Check out more about it here.
  • There’s been solid progress in proof aggregation and cross-stack interoperability: Polygon’s AggLayer has launched “pessimistic proofs” on the mainnet, utilizing SP1 to create ZK proofs for its unified bridge model. This is pretty important if you’re working with aggregated or bridged proofs across different stacks. Dive into the details here.
  • Finally, if you're worried about the costs of verification, zkVerify (by Horizen Labs) has debuted as a dedicated verification chain. They’re claiming over 90% lower verification costs compared to L1s, which is a big win for anyone dealing with large proof volumes. You can read more about it here.

Bottom line: By 2025, you'll have some cool options for outsourcing proof services. You can tap into a decentralized market, bring your own GPUs or FPGAs to earn some cash, or even mix it up with both approaches. Plus, you can offload verification to a specialized chain, which should help cut down on fees.


2) Choose your network(s) and client role

Most teams fall into one of three roles. Choose the one that fits your control and compliance needs along with your latency SLOs.

1) Market Prover (bring compute, earn rewards)

  • Boundless: Here’s the deal--run Bento (that’s their proving stack), stake your ZKC on Ethereum, accept bids on Base, and keep track of your work logs each epoch. If you miss a deadline, you're in for a hit: 50% of your posted ZKC collateral goes up in smoke, while the other half turns into a bounty. The cool part? Your rewards grow based on how much you stake and the “verifiable work” you deliver. Check out the details here.
  • SP1 Network: Want to get in on the action? You can enroll as an approved prover during the Stage 2.x testnet phases. Right now, they’re really looking for professional operators with datacenter GPUs, and hey, there are FPGA lanes coming soon too. For more info, take a look at this blog post.

2) Requester (buy proofs, integrate receipts)

  • Boundless: You can submit jobs to the market and get a SNARK/STARK receipt verified on your favorite chain (starting with Base/Ethereum). You’ll pay per job, and according to Blockworks, typical jobs were aimed at “< $30” during the testnet for some heavy zkVM computing. Check out more details here.
  • SP1: If you're looking to request proofs, you can do that via the network explorer or API. With well-tuned clusters, real-time block-level proving is totally doable! Dive into the details here.

3) AVS Consumer (Offloading to Restaked Operators)

  • Lagrange (EigenLayer AVS): This is a decentralized prover network that teams up with institutional operators. It’s designed to help decentralize ZK Stack provers or handle coprocessor workloads. According to ZKsync’s roadmap, they plan to direct a significant chunk of their proving tasks to these external operators. You can read more about it here.

If verification is eating up your budget, consider using zkVerify and only post attestations to multiple L1/L2s when necessary. (prnewswire.com)


3) Hardware blueprints that actually work in 2025

Use the Right Accelerators for Your Proof System and Latency Goals

When it comes to choosing accelerators for your proof system and hitting those latency targets, you’ve got to know where the "sweet spots" are right now. Here are some of the current favorites:

  • Real‑time SP1 proving (Ethereum L1 blocks): With 16 RTX 5090s (32 GB GDDR7 each), expect 99.7% of blocks proven in under 12 seconds per Succinct’s benchmark. That's a practical basis for setting p90/p95 latency service-level objectives (SLOs) for exchanges and bridges. Check out more details here.

    • Here are some key specs for the RTX 5090: it comes with 32 GB GDDR7, boasts 21,760 CUDA cores, delivers around ~1.79 TB/s bandwidth, and has a 575 W TGP. Just a heads up: make sure to plan for an additional 1 kW PSU headroom for each card if you’re stacking them tightly. More info can be found here.
  • High‑utilization datacenter nodes: The H200 model, which packs 141 GB of HBM3e memory and has a staggering 4.8 TB/s bandwidth, is perfect for memory-bound provers that handle large witnesses and FRI polynomials. Plus, with MIG partitions, you can efficiently share GPUs across different queues. Get more info here.
  • FPGA lanes for SP1: The AMD Alveo U55C, benchmarked with AntChain OpenLabs, delivers a 15-20× speedup over CPUs on SP1 v4.1.0 programs. Expect these to slot into the Succinct Prover Network, cutting both cost and latency. Want to learn more? Head over here.

Host Specs for Production Pilots

Here’s what we’ve seen work well in our production pilots:

  • CPU: Aim for 32-64 cores using modern Xeon or EPYC processors. You’ll want around 256-512 GB of RAM, especially if you’re dealing with heavy witness handling and orchestrating multiple GPUs.
  • Storage: Go for 2-4 NVMe Gen4/5 drives set up in RAID0 to achieve speeds between 6-14 GB/s for your trace and witness temp data. It’s a good idea to keep your SRS and verifier keys on a separate NVMe to dodge any resource contention.
  • Network: Depending on your scale, 25-100 GbE is ideal for fetching states and witnesses. Remember, low jitter is crucial if you’re aiming for those sub-10 second SLOs!
  • Power & Cooling: A 16×5090 rack can pull around 9-12 kW, so make sure to plan for structured power and hot aisle containment to keep everything running smoothly.

Tip: If you've got a mixed fleet with H200s and 5090s, try to plan those memory-heavy circuits on the H200 units. For the hash/NTT-heavy shards, it's a good idea to use your gaming GPUs whenever you can.
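The 9-12 kW figure above is easy to sanity-check. Here's a quick sketch; the TGP numbers come from the spec bullets above, while the host draw and 15% headroom factor are illustrative assumptions:

```python
# Quick sanity check on the 9-12 kW figure for a 16x RTX 5090 node.
# TGP values are from the spec bullets above; host draw and the 15%
# headroom factor are illustrative assumptions, not measurements.

def node_power_kw(num_5090=0, num_h200=0, host_draw_w=800, headroom=1.15):
    """Estimate sustained node draw in kW (before facility PUE)."""
    gpu_w = num_5090 * 575 + num_h200 * 700   # 575 W TGP (5090), ~700 W (H200)
    return (gpu_w + host_draw_w) * headroom / 1000.0

print(round(node_power_kw(num_5090=16), 1))   # lands inside the 9-12 kW range
```

Swap in your own host draw and headroom numbers; the point is to budget power per rack before the hardware arrives, not after.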


4) Kubernetes patterns for provers

Kubernetes makes it easy to scale multiple provers, shard queues, and share big GPUs safely.

  • To make the most of your GPUs, check out NVIDIA’s device plugin (along with the GPU Operator) for MIG with H100/H200 models. This lets you slice your GPUs into perfectly sized instances. When you’re setting up Pods, just request resources like nvidia.com/mig-3g.40gb. You can get all the details in the NVIDIA docs.
  • If you're working with non-MIG GPUs, such as the RTX 5090, consider enabling time-slicing. This allows you to run multiple tasks on the same card when your latency budgets are flexible. You can find more info on this in the NVIDIA sharing documentation.
  • Think about affinity and anti-affinity when spreading recursive shards. It’s a good idea to distribute them across different fault domains, like power phases and TORs. If you need to keep certain Pods close together on specific GPU UUIDs, go ahead and pin them; otherwise, try to keep your scheduling as adaptable as possible. For more insights, check out this GitHub discussion.

Example DaemonSet Snippet to Expose MIG and Stable UUIDs

Here's a skeleton DaemonSet for exposing MIG (Multi-Instance GPU) devices and stable UUIDs; swap in your own image and config paths:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: mig-uuid-exposer
  labels:
    app: mig-uuid
spec:
  selector:
    matchLabels:
      app: mig-uuid
  template:
    metadata:
      labels:
        app: mig-uuid
    spec:
      containers:
      - name: mig-uuid-container
        image: your-image-here
        env:
        - name: NVIDIA_VISIBLE_DEVICES
          value: "all"
        - name: NVIDIA_MIG_CONFIG
          value: "enabled"
        volumeMounts:
        - name: mig-uuid-volume
          mountPath: /etc/mig
      volumes:
      - name: mig-uuid-volume
        hostPath:
          path: /var/lib/mig

And here's the NVIDIA device plugin DaemonSet itself, configured for a mixed MIG strategy and UUID-based device IDs:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nvidia-device-plugin-daemonset
spec:
  selector:
    matchLabels:
      name: nvidia-device-plugin-ds
  template:
    metadata:
      labels:
        name: nvidia-device-plugin-ds
    spec:
      containers:
      - name: nvidia-device-plugin
        image: nvcr.io/nvidia/k8s-device-plugin:stable
        args: ["--mig-strategy=mixed", "--device-id-strategy=uuid"]
        env:
        - name: FAIL_ON_INIT_ERROR
          value: "true"
        - name: PASS_DEVICE_SPECS
          value: "false"
        securityContext:
          allowPrivilegeEscalation: false
Docs and Profiles: H100 MIG Profiles, K8s GPU Scheduling

See NVIDIA's H100 MIG profile reference and the Kubernetes GPU scheduling docs for details.


5) Boundless (RISC Zero): what running a prover actually looks like

What to Expect Operationally:

Running a Boundless prover boils down to a repeating cycle: stake collateral, bid on jobs, prove within the deadline, and settle receipts each epoch. Here's what that looks like in practice:

  • Stake and Collateral:

    • You can stake ZKC on Ethereum to start earning PoVW mining rewards.
    • To get going, you’ll need to lock up about 10 times the job’s maximum fee as proving collateral on Base. Just a heads up: if you miss the deadline, half of it will be burned, and the other half turns into a bounty. (docs.boundless.network)
  • Software Stack: Bento is your go-to for a multi-tenant proving cluster, paired with the RISC Zero toolchain on CUDA nodes. Super handy! (docs.beboundless.xyz)
  • Epoch Cycle: Each epoch, you’ll need to send in your aggregated “work logs” to claim your rewards. The cool part? Your rewards will grow based on your stake and the compute power you deliver. (docs.boundless.network)
  • CLI Essentials:
# Tooling
rzup install risc0-groth16

# Enable ZK mining in Bento
export REWARD_ADDRESS=0xYourRewardsWallet

# Stake on Ethereum (via official portal), then run Bento workers
bento start --cluster <name> --gpus all

# Periodically claim rewards after epoch finalization
boundless-cli claim-mining-rewards

(docs.boundless.network)

  • Ethereum Block Proving with Zeth: Zeth handles L1 block proofs inside the RISC Zero zkVM, leveraging Reth’s stateless execution. Make sure to have an archival RPC handy to grab those block witnesses. And don’t forget to cache aggressively! (github.com)

SLA Guidance

For SLAs, measure from job acceptance to proof settled. Aim for a turnaround of 30 seconds or less on multi-GPU nodes for typical workloads; for specialized pipelines such as Merkle or FRI offload, you can tighten that further.


6) Succinct SP1: hitting real‑time and staying secure

  • Latency: The SP1 Hypercube has shown some impressive numbers--99.7% of Ethereum L1 blocks are handled in under 12 seconds using 16× RTX 5090. In our pilot tests, we're aiming for a p95 of ≤ 12 seconds, but let's be prepared for a few exceptions with those “mega-blocks.” (blog.succinct.xyz)
  • Security: Keeping your upgrade paths in mind is super important. Succinct has been proactive, fixing and disclosing issues with SP1 v3/Plonky3. For production use, stick to SP1 Turbo+ and make sure to use pinned verifiers (we've even frozen routers to older contracts). Remember to check for version attestation in your CI before taking on new jobs. (blog.succinct.xyz)
  • Hardware acceleration roadmap: If you're looking at FPGA solutions, the AMD Alveo U55C lanes can give you a whopping 15-20× speedup over CPUs. Expect to see network operators implementing FPGA queues where it makes sense for latency and cost. (blog.succinct.xyz)
  • Operational tip: To optimize performance, try isolating recursion layers into separate K8s queues and place them alongside high-bandwidth NVMe. Don’t forget to use job annotations, like recursion.depth, to direct tasks to MIG slices that have enough memory.
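To make the operational tip concrete, here's a hypothetical routing helper that maps a recursion.depth job annotation to the smallest MIG slice with enough memory. The profile names are NVIDIA's H200 MIG profiles; the depth-to-memory mapping is an assumption for illustration:

```python
# Hypothetical scheduler helper: map a job's recursion.depth annotation
# to the smallest MIG slice with enough memory. Profile names are
# NVIDIA's H200 MIG profiles; the 16 GiB-per-level estimate is an
# illustrative assumption, not a measured figure.

MIG_PROFILES = [                      # (K8s resource name, memory in GiB)
    ("nvidia.com/mig-1g.18gb", 18),
    ("nvidia.com/mig-3g.71gb", 71),
    ("nvidia.com/mig-7g.141gb", 141),
]

def mig_resource_for(recursion_depth: int, gib_per_level: int = 16) -> str:
    """Pick the smallest MIG profile whose memory covers the estimated need."""
    need = recursion_depth * gib_per_level
    for name, gib in MIG_PROFILES:
        if gib >= need:
            return name
    raise ValueError(f"no MIG profile large enough for depth {recursion_depth}")

print(mig_resource_for(1))   # shallow shard: smallest slice is enough
print(mig_resource_for(4))   # deeper recursion: needs a larger slice
```

A mutating admission webhook or a small scheduler extender could apply this mapping to set the Pod's `nvidia.com/mig-*` resource request automatically.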

Joining the network: Stage 2.x testnet is focusing on professional operators first. Keep an eye on the explorer for updates on onboarding windows and benchmarks. Check it out here: (blog.succinct.xyz)


7) AVS‑based networks (Lagrange, Fermah): decentralize proving via restaking

Why Enterprises Appreciate This

You get operator diversity, reduced single-vendor concentration risk, and baseline SLOs, all without having to juggle a ton of vendor contracts.

  • Lagrange ZK Prover Network: Just kicked off on the EigenLayer mainnet! Coinbase, OKX, Kraken’s Staked, Nethermind, and a bunch of others are stepping up to run provers. This network is all about decentralizing ZK Stack provers and managing zk coprocessor tasks. Check it out here.
  • ZKsync 2025 direction: They’re planning to shift a lot of the proving work to external AVSs like Lagrange and Fermah and will be integrating EigenDA for data availability. If you're diving into the ZK Stack, get ready for a mixed proving setup. More details can be found here.
  • Fermah: This is an exciting new universal proof market that’s powered by GPUs and FPGAs. They wrapped up a seed round in late 2024 and are now working with ZKsync for decentralized proof generation. You can read more about it here.

Vendor diligence: ask for public slash records, verified hardware configurations, and auditable fair-queueing policies.


8) Data plumbing: SRS, witnesses, and caches are your P0 bottlenecks

  • SRS sizes can really balloon. For KZG ceremonies, they could end up being several GBs; just look at Halo2, where the SRS needs between 2^25 and 2^26 for those larger circuits, which means provers can be using tens to hundreds of GBs of RAM. It’s best to keep your SRS on a dedicated NVMe drive and pin versions with checksums. (zkresear.ch)
  • When it comes to Ethereum witnesses for block proving, Zeth is going to need archival RPC to grab execution witnesses. To dodge those annoying cold-start misses, think about adding a prefetcher with a content-addressed cache, like a local RocksDB with S3 as a backup. (github.com)
  • For prover pipelines, pre-proving and input preparation often take the longest. External teams have shared that they managed to cut down pre-prove fetching and MPT building times from minutes to just seconds with some slick pipeline optimizations--so it’s a smart move to plan for a prefetch stage based on each job class. (hozk.io)
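A minimal sketch of that prefetch stage, using an in-memory dict to stand in for the local RocksDB store and a callback for the archival-RPC/S3 slow path (both are assumptions for illustration):

```python
# Minimal sketch of a content-addressed witness cache with a prefetch
# hook. A dict stands in for the local store (RocksDB in production)
# and `fetch_from_origin` for the archival-RPC / S3 fallback; both are
# illustrative assumptions.

import hashlib

class WitnessCache:
    def __init__(self, fetch_from_origin):
        self._store = {}
        self._fetch = fetch_from_origin
        self.hits = 0
        self.misses = 0

    @staticmethod
    def key(payload: bytes) -> str:
        # Content-addressed: identical requests share one cache entry.
        return hashlib.sha256(payload).hexdigest()

    def get(self, request: bytes) -> bytes:
        k = self.key(request)
        if k in self._store:
            self.hits += 1
            return self._store[k]
        self.misses += 1
        witness = self._fetch(request)      # slow path: archival RPC / S3
        self._store[k] = witness
        return witness

    def prefetch(self, requests):
        """Warm the cache before jobs land to avoid cold-start misses."""
        for r in requests:
            self.get(r)

# Usage: prefetch upcoming block witnesses, then serve provers from cache.
cache = WitnessCache(lambda req: b"witness-for-" + req)
cache.prefetch([b"block-100", b"block-101"])
assert cache.get(b"block-100") == b"witness-for-block-100"
print(cache.hits, cache.misses)   # 1 2
```

Track the hit/miss counters per job class; a falling hit rate is an early warning that your prefetcher is lagging the job queue.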

9) Verification costs and where to put them

  • When you verify on Ethereum, zk proof verification can cost hundreds of thousands of gas per proof, and more during gas spikes. That’s where zkVerify comes in: it claims to cut those costs by over 90% by shifting verification to its purpose-built chain, then attesting results to your target L1/L2s. Definitely worth considering for high-volume applications. (prnewswire.com)

Example routing:

  • Start by proving on Boundless/SP1 → Then, verify on zkVerify → Finally, post the attestations to Ethereum/Base/Arbitrum, but only when you need to settle.
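To see why this routing pays off, here's a back-of-envelope comparison of per-proof L1 verification versus the batched off-chain route. Every number here (gas per verify, gas price, ETH price, per-proof fee) is an illustrative assumption, not a quote:

```python
# Illustrative comparison of verifying every proof on L1 vs. batching
# through a dedicated verification chain with a single attestation.
# All gas, price, and fee figures are assumptions for the sketch.

def l1_direct_cost(n_proofs, gas_per_verify=300_000,
                   gas_price_gwei=20, eth_usd=3_000):
    eth = n_proofs * gas_per_verify * gas_price_gwei * 1e-9
    return eth * eth_usd

def offchain_route_cost(n_proofs, per_proof_usd=1.50, attestations=1,
                        gas_per_attest=120_000,
                        gas_price_gwei=20, eth_usd=3_000):
    attest_eth = attestations * gas_per_attest * gas_price_gwei * 1e-9
    return n_proofs * per_proof_usd + attest_eth * eth_usd

n = 1_000
direct = l1_direct_cost(n)
routed = offchain_route_cost(n)
print(f"direct L1: ${direct:,.0f}  routed: ${routed:,.0f}  "
      f"savings: {100 * (1 - routed / direct):.0f}%")
```

Under these assumptions the routed path clears the "over 90% cheaper" bar; rerun it with your own gas data before committing to an architecture.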

10) Security and compliance guardrails

  • Version pinning and attestation:

    • SP1: Make sure to enforce Turbo+; block those V2/V3 provers; and don’t forget to verify that your recursion circuits have completeness checks (thanks to the Turbo patch). Check out the details here.
    • Boundless: It's important to enforce a minimum Bento build hash across your fleet and require deterministic guest builds (think RISC Zero images) to keep image drift at bay. More info can be found here.
  • Economic risk: It’s crucial to wrap your head around the slashing math for Boundless (we’re talking about roughly 10× collateral and a 50% burn if you miss). Treat wallet operations as production keys, and make sure to separate your staking and reward wallets. You can read more about it here.
  • Isolation: For H100/H200, consider using MIG or dedicated bare-metal partitions for your 5090s, especially when dealing with sensitive inputs. For non-sensitive tasks, time-slicing should do the trick. Dive deeper into this here.
  • Formal verification posture: Keep an eye on SP1’s ongoing formal work, like their Lean proof components, especially when you’re moving compliance-critical logic into the zkVM. For more insights, check this out here.
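The Boundless collateral math above is worth encoding so finance can stress-test it. The 10× multiple and 50% burn follow the cited docs; the fee values are illustrative:

```python
# Back-of-envelope Boundless collateral math, per the docs cited above:
# roughly 10x the job's max fee is locked as collateral, and on a missed
# deadline 50% is burned while 50% becomes a bounty. Fee values are
# illustrative.

def collateral_at_risk(max_fee_usd, multiple=10):
    return max_fee_usd * multiple

def miss_outcome(collateral_usd, burn_share=0.5):
    burned = collateral_usd * burn_share
    bounty = collateral_usd - burned
    return burned, bounty

locked = collateral_at_risk(30)          # a $30 max-fee job locks ~$300
burned, bounty = miss_outcome(locked)
print(locked, burned, bounty)            # 300 150.0 150.0
```

Multiply `locked` by your concurrent job count to see the total collateral a fleet puts at risk at any moment.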

11) Practical deployment examples

A) Boundless Bento on a 4×GPU box (Ubuntu 22.04, CUDA 12, RTX 5090)

# 1) Drivers & CUDA (omitted for brevity). Verify nvidia-smi shows 4x RTX 5090.
nvidia-smi

# 2) Install toolchains
curl -L https://rzup.risczero.com/install.sh | bash
rzup install risc0-groth16

# 3) Install Bento and set rewards wallet
export REWARD_ADDRESS=0xYourRewardsWallet
bento init --cluster=prod-a
bento start --cluster=prod-a --gpus all

# 4) Stake ZKC (once) on Ethereum via the official staking portal
# 5) Periodically submit work logs and claim rewards after epoch finalization
boundless-cli submit-worklog
boundless-cli claim-mining-rewards

Operational Notes

  • Make sure to post at least one aggregated work proof during each epoch.
  • Keep an eye on the on-chain job deltas.
  • Maintain high Base RPC quotas to steer clear of any settlement delays. Check out the details here.

B) SP1 Prover Queue with MIG on H200 Nodes (K8s)

apiVersion: batch/v1
kind: Job
metadata:
  name: sp1-core-shard
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: prover
        image: ghcr.io/succinctlabs/sp1-prover:turbo
        resources:
          limits:
            nvidia.com/mig-3g.40gb: "1"
        env:
        - name: SP1_MODE
          value: "hypercube"
        - name: SP1_PROVER_VERSION
          value: "4.1.0-turbo"
        - name: CACHE_DIR
          value: "/mnt/srs-cache"
        volumeMounts:
        - mountPath: /mnt/srs-cache
          name: srs
      volumes:
      - name: srs
        emptyDir: {}

Aim for a p95 of less than 12 seconds, but only on those top-tier 5090 clusters. When dealing with MIG’d H200, be sure to match the shard sizes with the MIG profiles and keep an eye on memory pressure. Check out more on this topic here.

C) Verifying off‑L1 to cut costs (zkVerify sidecar)

Here's the deal: you can post proofs to zkVerify, grab the verification result, and then attest it to your target chains. By switching from direct L1 verification, you could save over 90%! Check out more details here.


12) SLAs and dashboards that matter

Keep an eye on these for each queue:

  • Ingest → proof-receipt latency per job class (block proving vs. custom zkVM): track p50, p95, and p99.
  • Prover throughput in kHz (SP1 reports kHz metrics in its FPGA studies), plus cycles/sec per GPU/FPGA to catch regressions. (blog.succinct.xyz)
  • Witness cache hit rates, SRS cache I/O latency, and NVMe write amplification.
  • On-chain settlement latency and failure modes (RPC timeouts, gas spikes).
  • Economic health: posted collateral at risk (Boundless), slashing events, realized rewards, and GPU utilization.
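The latency percentiles in the first bullet can be computed straight from per-queue samples, for example with Python's statistics module:

```python
# Sketch of per-queue latency percentiles worth alerting on.
# statistics.quantiles with n=100 gives cut points we can read
# p50/p95/p99 from directly. Sample values are synthetic.

import statistics

def latency_percentiles(samples_s):
    cuts = statistics.quantiles(samples_s, n=100, method="inclusive")
    return {"p50": cuts[49], "p95": cuts[94], "p99": cuts[98]}

# Synthetic block-proving latencies in seconds, with one "mega-block".
samples = [6, 7, 7, 8, 8, 9, 9, 10, 11, 25]
p = latency_percentiles(samples)
print(p)
assert p["p50"] <= 12, "median within the real-time SLO"
```

Feed this from your metrics pipeline per queue and job class; alert on p95/p99, since a healthy median can hide a tail of mega-blocks.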

13) Cost model (worked example)

  • Real-time SP1 proving for L1 Ethereum blocks (16× 5090 cluster):
    • Capex: 16× 5090s at roughly $2,000 each comes to about $32k, plus hosts, rack, and power distribution. Opex: 9-12 kW of continuous draw plus datacenter fees. Throughput: p95 under 12 seconds for 99.7% of blocks in Succinct’s tests. (blog.succinct.xyz)
  • Boundless proof jobs: In testing on the marketplace, pricing varied quite a bit; public guidance hinted at rates below $30 for those hefty zkVM tasks. As more GPUs come into play and PoVW settles down, we can expect those spreads to tighten up. (blockworks.co)
  • Verification: We should shift repeat verifications over to zkVerify for savings of 90% or more; let’s keep L1 verification just for checkpoints. (prnewswire.com)

Tip: Think of pre-proving and data fetch as key areas where you can save money; a solid witness cache often “saves” you more than just upgrading your GPU.
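Here's the worked example as a small model you can re-run with your own numbers. Card price, host cost, amortization window, power and colo rates are all illustrative assumptions:

```python
# Worked capex/opex sketch for the 16x RTX 5090 cluster above.
# Card price, host/rack cost, amortization window, power and colo
# rates are illustrative assumptions, not quotes.

def monthly_cost_usd(num_gpus=16, gpu_price=2_000, host_and_rack=8_000,
                     amortize_months=24, draw_kw=11.5,
                     power_usd_per_kwh=0.10, colo_usd_per_kw_month=150):
    capex = num_gpus * gpu_price + host_and_rack
    capex_monthly = capex / amortize_months
    energy = draw_kw * 24 * 30 * power_usd_per_kwh   # 30-day month
    colo = draw_kw * colo_usd_per_kw_month
    return capex_monthly + energy + colo

blocks = 30 * 24 * 3600 // 12            # one block proof every ~12 s
print(round(monthly_cost_usd()))          # ~$4,220/month under these assumptions
print(round(monthly_cost_usd() / blocks, 4))   # cost per block proof
```

At full utilization the per-proof cost is fractions of a cent, which is why utilization and pipeline efficiency, not card price, dominate the economics.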


14) Two 30‑day pilots we recommend

  • Pilot A (market prover): Get Bento up and running on a 4× 5090 node, put a small amount of ZKC on the line, and take on some low-risk jobs. This way, we can check out how PoVW rewards stack up against power costs. Once you’ve got that down, throw in a second node to see how it scales horizontally. The goal? Keep the SLA from acceptance to settlement at p95 ≤ 45 seconds. (docs.boundless.network)
  • Pilot B (real-time requester): Let’s use the SP1 network to prove live Ethereum blocks for our internal dashboard. We’re shooting for a p95 of under 12 seconds with a tuned 16× 5090 cluster, or we might just buy some extra capacity from a network operator. Oh, and don’t forget to add zkVerify to help cut down on verification costs! (blog.succinct.xyz)

If you're using the ZK Stack, give Lagrange/AVS routing a shot for a few of your chains. This will help you see how it impacts decentralization and operator diversity. Check it out here: (lagrange.dev).


15) Decision checklist for CTOs

  • Governance and risk
    • Which slashing and penalty models from different networks can we work with? For example, Boundless has a 50% burn on misses, while AVS uses slashing through EigenLayer. And who’s in charge of signing the keys? (docs.boundless.network)
    • Do we really need formal guarantees like SP1 Turbo and the ongoing formal work, or are audits and testing enough for our needs? (blog.succinct.xyz)
  • Latency budget
    • Should we aim for a p95 target of less than 12 seconds? That means we’ll need to budget for 16× 5090 or maybe just buy capacity. If not, we might be looking at 30 to 120 second windows and could consider using cheaper fleets. (blog.succinct.xyz)
  • Data plane
    • Are we in control of our archival RPCs and witness caches, which are necessary for Zeth and stateless execution, or are we leaning on third parties for that support? (github.com)
  • Verification spend
    • Should we keep our verification processes on L1, or is it smarter to switch to zkVerify for bulk? Let's model out scenarios that show we could save up to 90%. (prnewswire.com)
  • Scale and portability
    • Let’s consider using MIG/time-slicing for our mixed queues. We should also keep the SP1 and Boundless images pinned and ensure we’re attesting builds; plus, we need to make sure we separate our staking wallets from the reward ones. (docs.nvidia.com)

Appendix: Notes by ecosystem

  • Starknet: Exciting news! S-two has officially launched on the mainnet and it’s significantly faster than Stone. The roadmap is looking good too, with plans for decentralized proving and validators voting on blocks set for the end of 2025. Plus, they’re aiming for client diversity. Check it out here.
  • Polygon AggLayer: Great updates here as well--pessimistic proofs are now live on the mainnet. They’re using SP1 in the pipeline. Just a heads up, if you’re bridging across CDK/Sovereign stacks, you’ll want to make sure everything aligns on proof formats and the custody of aggregation keys. Find more details here.
  • ZKsync: On the decentralized proving front, they’ve got Fermah/Lagrange and optional EigenDA in play. If your chain runs on the ZK Stack, be prepared to route proving to external AVSs. Dive deeper into the topic here.

The takeaways

  • Prover networks are live in 2025: you can earn with your GPUs or FPGAs, buy proofs with solid SLAs, or do both. Check it out here: (blockworks.co).
  • While hardware is important, having the right pipelines is even more crucial. Before splurging on more GPUs, consider investing in witness/SRS caching and getting your Kubernetes scheduling in line. Take a look at this for more info: (github.com).
  • Security is an ongoing task that requires your attention. Make sure to pin your versions (like SP1 Turbo), push for on-chain slashing transparency, and regularly check that your build is consistent. Here's a good resource to help you out: (blog.succinct.xyz).

7Block Labs is here to help you design those pilot projects, strengthen your GPU fleets, and connect you with reliable operators across Boundless, SP1, and AVS networks.

Like what you're reading? Let's build together.

Get a free 30-minute consultation with our engineering team.

7BlockLabs

Full-stack blockchain product studio: DeFi, dApps, audits, integrations.

7Block Labs is a trading name of JAYANTH TECHNOLOGIES LIMITED.

Registered in England and Wales (Company No. 16589283).

Registered Office address: Office 13536, 182-184 High Street North, East Ham, London, E6 2JA.

© 2026 7BlockLabs. All rights reserved.