7Block Labs
Blockchain Services

By AUJay

Verifiable Data Services: An Operating Model for 24/7 Monitoring and Incident Response

Summary: This guide presents an operating model for verifiable data services (VDS) that run continuously, spanning on-chain data feeds, cross-chain messaging, and verifiable credentials. It includes concrete metrics, playbooks, staffing patterns, regulatory timelines, and tooling options to minimize mean-time-to-detect and mean-time-to-recover while preserving end-to-end cryptographic guarantees.

Why this matters now

If your protocol depends on “verifiability”--cryptographic proofs, signed oracle reports, verifiable credentials, or cross-chain attestations--then monitoring and incident response must treat data integrity as a first-class objective, not just uptime. The last two years have changed the landscape: Ethereum's Dencun upgrade introduced ephemeral data “blobs,” rollups lean harder on DA layers, Solana has demonstrated high throughput but also suffered a roughly 5-hour halt in 2024, and Verifiable Credentials 2.0 reached W3C Recommendation status in 2025. These shifts reshape how you detect, triage, and recover from data issues, especially at 2 a.m. on a Sunday. (ethereum.org)


What is a Verifiable Data Service (VDS)?

A VDS is any service that supplies cryptographically verifiable inputs to on-chain logic or enterprise systems. Typical components include:

  • Oracle-grade market data (push or pull) with on-chain verification.
  • Cross-chain messages and token transfers with defense-in-depth controls, such as CCIP with its risk-management layer.
  • Verifiable credentials (VCs) and attestations for users, devices, and assets.
  • Data-availability backends (blobs, Celestia, EigenDA) that your app can rely on to be retrievable or provably available.

Your operating model should keep an eye on the following:

  1. the validity of cryptographic data,
  2. whether your data is accurate compared to trusted references,
  3. how fresh the data is and any latency issues, and
  4. the liveness and finality of the underlying chains and DA layers.
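These four dimensions can be rolled into a single health gate that decides whether a data point may reach business logic. A minimal sketch in Python (all field names and thresholds here are illustrative, not tied to any particular SDK):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class VdsObservation:
    """One monitoring sample for a verifiable data feed (illustrative fields)."""
    proof_valid: bool          # 1) cryptographic validity of the report/proof
    divergence_bps: float      # 2) deviation from a trusted reference basket
    age_ms: int                # 3) staleness of the data point
    finality_lag_slots: int    # 4) liveness/finality of the underlying chain

def health_gate(obs: VdsObservation,
                max_divergence_bps: float = 4.0,
                max_age_ms: int = 2000,
                max_finality_lag: int = 32) -> List[str]:
    """Return the list of violated dimensions; an empty list means healthy."""
    violations = []
    if not obs.proof_valid:
        violations.append("validity")
    if obs.divergence_bps > max_divergence_bps:
        violations.append("correctness")
    if obs.age_ms > max_age_ms:
        violations.append("freshness")
    if obs.finality_lag_slots > max_finality_lag:
        violations.append("liveness")
    return violations
```

A quarantine decision then becomes a one-liner: `if health_gate(obs): quarantine(obs)`.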

The architecture you must observe 24/7

There are four layers to observe, each with its own signals, Service Level Objectives (SLOs), and playbooks.

1) Integrity and Crypto Verification

  • Target a signature/proof verification error rate below 1e-6 per 24 hours on hot paths.
  • Watch for VC proof failures and issuer DID resolution failures, including status-list checks and revocation-list reachability.
  • Track zk/KZG proof verification failures where they matter, e.g., rollup verification contracts and blob KZG commitments.

2) Correctness and Divergence

  • Measure cross-source price divergence windows (p50/p95/p99) across independent feeds.
  • Keep sanity guards against venue outliers and stale ticks.

3) Freshness and Latency

  • Track the end-to-end egress-to-onchain-verify latency distribution for pull oracles and low-latency streams.
  • Watch how data-staleness windows widen under stress or during reorgs.

4) Chain/DA Liveness and Finality

  • Track finality lag, reorg depth, and slot/epoch progression, along with blob inclusion rates and fees.
  • Track DA sampling success rates from light clients and retrieval error bursts from archival nodes.

Sources and what they imply for monitoring

  • Ethereum Dencun (EIP‑4844) introduced blob transactions whose data persists for roughly 18 days. This benefits rollups but requires freshness and availability alerting, especially when blob fees spike or retrieval slows. Track the blob gas base fee, inclusion, and L2 posting cadence alongside your usual execution-layer health. (ethereum.org)
  • Pull oracles (Pyth) and low-latency streams (Chainlink Data Streams) deliver sub-second updates--roughly 400 ms--with on-chain verification. Monitor both the off-chain retrieval channel (Hermes or the Streams API/WebSocket) and the on-chain verification path, and run canaries that fetch and verify a report every N blocks to confirm liveness without touching user flow. (docs.pyth.network)
  • Chainlink CCIP provides rate limits and an independent risk-management layer that can pause operations on anomalous activity. Your monitors must detect “paused” and rate-limited states, and your application should enforce value caps at its own edge. (docs.chain.link)
  • DA layers such as Celestia and EigenDA shift failure modes from “chain down” to “data not available right now.” Track DAS sampling failure rates, archival retrieval errors, and operator-set changes (e.g., EigenLayer slashing activation and opt-in status). (docs.celestia.org)
  • L1/L2 halts and slowdowns still happen: Solana's outage on February 6, 2024 lasted roughly 5 hours and required a coordinated validator restart. Playbooks must handle cluster restarts and staggered RPC recovery. (theblock.co)
  • Verifiable Credentials 2.0 became a W3C Recommendation in 2025. Production stacks should monitor VC verification failures, revocation bitstring status-list fetch errors, and DID document resolution latency. (w3.org)

SLOs and error budgets tailored for verifiable data

Borrowing the SRE Control Loop: Defining SLOs Beyond Uptime

Uptime is only one dimension of a resilient service. A complete SLO set also covers user experience (response time, error rate, throughput), reliability beyond availability (durability and consistency of stored data), operational metrics (incident response time, change failure rate), quality of service under peak load, and ultimately user satisfaction (NPS, support response time). Expanding SLOs this way gives a more holistic view of performance and reliability. The SLOs below apply that principle to verifiable data:

  • Data Integrity SLO: 99.999% of verified reports pass signature/proof checks and schema validation; anything that fails is quarantined from business logic.
  • Freshness SLO (latency-sensitive): 99.9% of trades settle with oracle data no older than 800 ms (pull-oracle/Streams paths) and 99.99% within 2 seconds. For push feeds, set per-asset max-staleness windows and triggers such as “must-refresh-if-volatility > X.”
  • Correctness SLO: 99.95% of windows show cross-source divergence ≤ X bps versus the reference basket; breaches trip circuit breakers automatically.
  • Chain/DA SLO: 99.9% of blob posts land within the expected cadence; DA sampling success rate ≥ 99.99% over the last N blocks; finality lag ≤ M slots at p99.
  • Cross-chain Safety SLO: rate-limit protections cap value flow per interval at ≤ VaR_15min; an anomaly-pause propagates within ≤ 2 blocks on every connected chain.

Use error budgets to govern change velocity when SLOs slip: once the budget is spent, freeze risky deployments, following Google's SRE error-budget policy. (sre.google)
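The budget arithmetic is simple enough to encode directly. A sketch (the 99.9% objective and event counts are examples):

```python
def error_budget_remaining(slo_target: float,
                           total_events: int,
                           bad_events: int) -> float:
    """Fraction of the error budget left for the window.

    1.0 means the budget is untouched; <= 0 means it is exhausted.
    slo_target: e.g. 0.999 for a 99.9% objective.
    """
    allowed_bad = (1.0 - slo_target) * total_events
    if allowed_bad == 0:
        return 1.0 if bad_events == 0 else 0.0
    return 1.0 - (bad_events / allowed_bad)

def freeze_deploys(remaining: float, threshold: float = 0.0) -> bool:
    """Error-budget policy: halt risky deployments once the budget is exhausted."""
    return remaining <= threshold

# A 99.9% freshness SLO over 1,000,000 settlements allows 1,000 stale ones;
# 600 consumed leaves 40% of the budget.
remaining = error_budget_remaining(0.999, 1_000_000, 600)
```

In practice the same calculation feeds a burn-rate alert: page when the budget is being consumed far faster than the window length would allow.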


Signal design: what to alert on (and what to log)

Page a human only for conditions that require immediate judgment; everything else becomes a ticket or a log entry.

Page immediately when:

  • Signed-report verification fails for any hot-path asset after two retries with jitter.
  • The pull-oracle verify path returns stale data (beyond target) across two independent RPCs.
  • Cross-source divergence breaches your circuit-breaker limit.
  • A CCIP risk-network pause or rate-limit cap is hit for a specific token or lane.
  • The DA sampling failure rate exceeds your threshold, or the blob-inclusion posting window is missed twice.
  • Chain-halt signals appear: stalled slot/epoch progression, poor RPC health across 70% or more of your providers, or an official status page reporting a “major outage.” (theblock.co)
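The staleness condition above requires confirmation on two independent RPCs, which avoids paging on a single provider's hiccup. A sketch of that decision, with the per-provider fetchers injected so any client library can back it (the fetcher shape is an assumption, not a specific SDK API):

```python
import time
from typing import Callable, List, Optional

def staleness_page_decision(fetch_last_update: List[Callable[[], float]],
                            max_age_s: float,
                            now: Optional[float] = None) -> bool:
    """Page only if at least two independent RPCs both report stale data.

    fetch_last_update: one callable per RPC provider, each returning the
    unix timestamp of the feed's latest on-chain verified update.
    """
    now = time.time() if now is None else now
    stale_confirmations = 0
    for fetch in fetch_last_update:
        try:
            if now - fetch() > max_age_s:
                stale_confirmations += 1
        except Exception:
            continue  # a provider error alone is not evidence of staleness
    return stale_confirmations >= 2
```

Provider errors are deliberately not counted as staleness; they feed a separate RPC-health signal instead.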

Send tickets (next few days):

  • Single-feed disconnections that self-heal, and intermittent DAS timeouts still under threshold.
  • SDK deprecations and upcoming endpoint changes (e.g., Streams feed lifecycle notices). (docs.chain.link)

Log for analysis:

  • Per-asset latency histograms; verify-gas costs; deltas between on-chain verified values and post-trade settlement values.

Concrete metrics and thresholds we’ve seen work

  • Integrity: verification failure rate under 1e-6 per day; VC status-list fetch error rate under 0.1% daily. (w3.org)
  • Freshness: p99 end-to-end time (off-chain fetch plus on-chain verification) under 2 seconds; p50 under 300 ms on low-latency paths. (docs.chain.link)
  • Divergence: p99 price difference under 4 bps versus a composite, tuned per asset liquidity.
  • DA availability: data-availability sampling success rate of at least 99.99%; archival retrieval errors under 0.05%. (docs.celestia.org)
  • Cross-chain safety: token-channel value over 15 minutes at or below the cap; anomaly-pause propagation within 2 blocks per chain. (blog.chain.link)
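The divergence threshold compares each feed against a composite reference. A sketch of the computation (a median-based composite is an assumption here; production baskets are often liquidity-weighted):

```python
from statistics import median
from typing import List

def divergence_bps(feed_price: float, reference_prices: List[float]) -> float:
    """Deviation of one feed from the composite reference, in basis points."""
    composite = median(reference_prices)
    return abs(feed_price - composite) / composite * 1e4

def breaker_should_trip(feed_price: float,
                        reference_prices: List[float],
                        limit_bps: float = 4.0) -> bool:
    return divergence_bps(feed_price, reference_prices) > limit_bps

# A feed at 3001.5 against [3000.0, 3000.6, 2999.8]: the median composite is
# 3000.0, so divergence = 1.5 / 3000.0 * 1e4 = 5 bps -> trips a 4 bps limit.
```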

Tooling blueprint (reference stack)

  • Collection and tracing: OpenTelemetry plus Prometheus, with custom exporters for clients, oracles, and off-chain services.
  • Visualization: Grafana panels for integrity, freshness, divergence, and chain finality.
  • Alerting/on-call: PagerDuty, with incident types mapped to playbooks and readiness reports to reduce Mean Time to Acknowledge (MTTA).
  • Logs: Loki/Elastic with structured fields: asset_id, chain_id, proof_type, verify_ms.
  • Chaos and drills: scheduled invariant violations injected into staging, weekly game days, and synthetic commit-reveal trades to exercise frontrunning defenses on Data Streams.
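The structured-log fields above are what make cross-layer queries possible (“all verification failures for asset X on chain Y”). A stdlib-only sketch of the emit side, using the field set from the list above (everything else is illustrative):

```python
import json
import logging
import sys

logging.basicConfig(stream=sys.stdout, level=logging.INFO, format="%(message)s")
log = logging.getLogger("vds")

def format_verification(asset_id: str, chain_id: int, proof_type: str,
                        verify_ms: float, ok: bool) -> str:
    """Build one structured verification record for Loki/Elastic ingestion."""
    return json.dumps({
        "event": "report_verification",
        "asset_id": asset_id,
        "chain_id": chain_id,
        "proof_type": proof_type,
        "verify_ms": round(verify_ms, 2),
        "ok": ok,
    }, sort_keys=True)

def log_verification(*args, **kwargs) -> None:
    log.info(format_verification(*args, **kwargs))

log_verification("ETH-USD", 1, "ecdsa_report", 12.7, True)
```

Keeping the record a flat JSON object with stable keys is what lets Loki/Elastic index and aggregate it cheaply.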

Chain- and vendor-specific monitors you should implement

  • Ethereum after Dencun

    • Blob gas base fee and inclusion rates per posting job.
    • KZG commitment verification failures, “blob unavailability” retries, and fallback to redundant relays.
    • Rollup posting cadence versus target; alert on two consecutive missed intervals. (ethereum.org)
  • Solana

    • Slot lag, vote credits, delinquency, and leader-schedule health; RPC p95 latency and 5xx burst alarms.
    • PoH drift and UDP packet loss as leading indicators of missed leader slots.
    • An incident runbook for the “cluster restart” state, with staggered RPC recovery and dApp warm-up. (github.com)
  • Pull Oracles (Pyth)

    • Hermes endpoint latency and error rates; on-chain updatePriceFeeds reverts (e.g., StalePrice) under fee-estimation drift.
    • The 400 ms off-chain update cadence versus on-chain staleness. (docs.pyth.network)
  • Low-latency streams (Chainlink)

    • Streams API/WebSocket HA-mode health, deduplication stats, gas-spike monitoring, and commit-reveal timing versus mempool conditions. (docs.chain.link)
  • Cross-chain (Chainlink CCIP)

    • Rate-limit utilization per token and lane; RMN pause state; deltas between expected and actual token amounts (should be zero with mint/burn pools).
    • CCIP service limits under soak tests before launch. (docs.chain.link)
  • Data Availability

    • Celestia: DAS success rate, light-node sampling window, and archival retrieval error rates; watch pruning behavior and the recency window. (docs.celestia.org)
    • EigenDA: posting throughput and latency, and operator-set changes; watch EigenLayer slashing and opt-in states, which affect AVS reliability. (coindesk.com)
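Slot-lag and provider-health checks generalize across chains: poll each provider, compare against the maximum observed head, and alarm on a quorum of unhealthy providers. A sketch with the slot readings injected (a real deployment would populate them via each provider's JSON-RPC, e.g., Solana's getSlot):

```python
from typing import Dict, Optional, Set, Tuple

def provider_health(slots: Dict[str, Optional[int]],
                    max_lag: int = 150,
                    unhealthy_quorum: float = 0.7) -> Tuple[Set[str], bool]:
    """Return (lagging_or_down_providers, page_now).

    slots: provider name -> latest observed slot, or None if the RPC call failed.
    Pages when at least `unhealthy_quorum` of providers are lagging or unreachable.
    """
    head = max((s for s in slots.values() if s is not None), default=0)
    bad = {name for name, s in slots.items()
           if s is None or head - s > max_lag}
    page = len(bad) >= unhealthy_quorum * len(slots)
    return bad, page
```

The 70% quorum mirrors the paging condition in the alerting section; `max_lag` is chain-specific and illustrative here.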

Incident taxonomy and first 15 minutes

Ground your workflows in NIST SP 800‑61 Rev. 3 (Final, April 2025) and align them with CSF 2.0. Define clear severity levels (SEV1-SEV4) and assign an owner to each. (csrc.nist.gov)

  • SEV1 Examples

    • A cross-source divergence breach affecting active users.
    • A CCIP RMN pause or rate-limit cap hit mid-stream during user flows.
    • A DA posting failure across two intervals for production rollups.
  • First 15 Minutes Checklist

    • Declare the incident; engage the primary on-call and comms lead; open the war room.
    • Freeze changes (per the error-budget policy) except critical P0 fixes; switch the app to degraded mode: read-only, withdraw-only, trading pause, or circuit breakers enabled. (sre.google)
    • Confirm the blast radius: affected assets, chains, and users; record the last “good” verification height.
    • If cross-chain, set conservative per-interval rate limits and queue non-critical transfers. (blog.chain.link)
    • If DA-related, retry with backoff, switch to a secondary posting region, and post minimal commitments first for critical channels.
    • For a Solana “major outage,” raise client RPC timeouts, disable latency-sensitive paths, and retry after the cluster-restart notice. (theblock.co)
  • Communication

    • For regulated EU entities, follow the DORA timelines: an initial notification within 4 hours of classifying an incident as major (and no later than 24 hours after becoming aware of it), an intermediate report within 72 hours of the initial notification, and a final report typically within one month (per the RTS/ITS). Build templates and automate this pipeline. (advisera.com)
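Those deadlines are easy to miscompute under pressure, so derive them mechanically at declaration time. A sketch (the 4 h / 24 h / 72 h / one-month intervals follow the summary above; confirm exact obligations with counsel, and note “one month” is approximated as 30 days):

```python
from datetime import datetime, timedelta
from typing import Dict

def dora_deadlines(classified_major_at: datetime,
                   aware_at: datetime) -> Dict[str, datetime]:
    """Compute DORA major-incident reporting deadlines from two anchor times."""
    # Initial notification: 4 h after classification, but never later than
    # 24 h after first becoming aware of the incident.
    initial = min(classified_major_at + timedelta(hours=4),
                  aware_at + timedelta(hours=24))
    return {
        "initial_notification": initial,
        "intermediate_report": initial + timedelta(hours=72),
        "final_report": initial + timedelta(days=30),  # "one month", approximated
    }

aware = datetime(2025, 3, 1, 2, 0)
classified = datetime(2025, 3, 1, 3, 30)
deadlines = dora_deadlines(classified, aware)
# Initial notification due 07:30 the same day: 4 h after classification
# arrives before 24 h after awareness.
```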

Playbooks with precise triggers

1) Price Feed Divergence

  • Trigger: abs_diff_bps ≥ 5 bps between primary and secondary sources for three consecutive minutes.
  • Action: Trip the circuit breaker for the affected markets; switch to the verified on-demand pull path (commit-reveal); widen slippage until conditions stabilize; page the on-call team. (docs.chain.link)

2) CCIP Anomaly or Rate-Limit Hit

  • Trigger: An RMN “curse” or pause is detected, or rate-limit utilization reaches 90% or more for two windows.
  • Action: Move the app to queue-only mode for cross-chain transfers; raise per-tenant caps; publish status; plan a dry-run unpause test; re-verify zero-slippage guarantees on the token pools after resumption. (blog.chain.link)

3) Ethereum Blob Posting Backlog

  • Trigger: If there are two missed blob intervals or if the blob base fee is greater than Y for Z minutes.
  • Action:
    • Give priority to critical channels.
    • Compress batches.
    • Fail over to a redundant poster.
    • Increase on-chain fee caps.
    • Notify about the risk of delayed settlements.
    • Keep an eye on KZG verification signals. (ethereum.org)

4) Solana Cluster Stall

  • Trigger: Slot progression has stalled for more than 120 seconds, accompanied by validator coordination messages.
  • Action: Pause trading and issuance; set states to read-only; post status updates every 30 minutes; resume only after confirmation that the v1.17.x+ restart is complete and RPC providers have recovered. (theblock.co)
5) DA Sampling Degradation (Celestia)

  • Trigger: das_failure_rate > 0.1% for 10 minutes, or archival retrieval errors above 0.5%.
  • Action: Shift reads to reliable archival peers; increase sampling redundancy; limit dependent features; open a P1. (docs.celestia.org)
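Several of the triggers above share one shape: a condition must hold for N consecutive samples (e.g., ≥ 5 bps for three straight minutes). A small evaluator sketch:

```python
from collections import deque

class SustainedTrigger:
    """Fires when a condition holds for `window` consecutive samples."""

    def __init__(self, window: int):
        self.window = window
        self.recent = deque(maxlen=window)  # rolling record of recent samples

    def update(self, condition_met: bool) -> bool:
        self.recent.append(condition_met)
        # Fire only once the window is full and every sample in it is True.
        return len(self.recent) == self.window and all(self.recent)

# One sample per minute: 5 bps divergence for three straight minutes fires.
divergence_trigger = SustainedTrigger(window=3)
fired = [divergence_trigger.update(bps >= 5) for bps in [6.0, 7.1, 4.0, 5.2, 5.8, 6.3]]
# fired -> [False, False, False, False, False, True]
```

A single dip below the threshold (the 4.0 sample) resets the streak, which is exactly the debouncing behavior duration-based playbook triggers need.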

Staffing and on-call that actually scales

  • Coverage: follow-the-sun in two layers--primary by protocol SRE, secondary by a crypto data engineer.
  • Rotations: one week on-call across 4-6 engineers; after-hours pages limited to P0/P1.
  • Handover: a mandatory daily 15-minute sync plus a written shift report covering open risks such as volatility or chain upgrades.
  • Automation: pre-approved runbooks executable from chat or the IDP with audit trails--restarting fetchers, adjusting rate limits, switching RPC clusters. PagerDuty-style automation measurably reduces MTTR for recurring issues. (pagerduty.com)

Compliance guardrails you can operationalize

  • ISO 27001:2022: align monitoring and incident response with the new Annex A items--threat intelligence, cloud service security, activity monitoring, secure coding--and keep a clear mapping from alerts to controls in your Statement of Applicability (SoA). (dqsglobal.com)
  • SOC 2: for enterprise sales, pursue a Type 2 audit (operating effectiveness over time) rather than a Type 1; incident timelines, runbooks, and post-incident reviews form the audit trail. (soc2auditors.org)
  • Chainlink certifications: when using CCIP/Data Feeds, note that Chainlink Labs holds ISO 27001 and SOC 2 Type 1 certification--useful for vendor due diligence. (chain.link)
  • DORA (EU, applicable from Jan 17, 2025): integrate its 3-stage reporting into your incident-response tooling and templates, and align with legal and compliance on jurisdiction-specific thresholds. (eba.europa.eu)

Patterns for common use cases

  • Low-latency trading on-chain

    • Use a pull oracle (Pyth) or Streams commit-reveal to bind data to trades and mitigate frontrunning. Track p50/p95 verification latency and gas variance; if p95 verify time exceeds 1.2 seconds, widen slippage automatically. (docs.pyth.network)
  • Tokenized RWAs with ongoing assurance

    • Combine Proof of Reserve (PoR) or SmartData feeds with protocol-level circuit breakers that halt minting or redemptions if reserve NAV deviates more than X% or the feed fails verification twice. Track reserve freshness and audit-chain anchoring cadence. (chain.link)
  • Cross-chain distribution at enterprise scale

    • Use CCIP rate limits and token-developer attestation for burn/mint flows; alert on missing attestations or value caps approaching thresholds; rehearse RMN-initiated pauses in staging. (chain.link)
  • Rollups with DA optionality

    • When posting to EigenDA, track operator sets and, as of April 17, 2025, slashing activation and the opt-in stance of the AVSs supporting your route; alert on operator changes that drop diversity below policy limits. (coindesk.com)

Drill cadence and chaos tests

  • Weekly synthetic incident: inject a 10 bps price divergence; verify the circuit breaker trips within one block, reads remain intact, and user communications go out in under five minutes.
  • Monthly DA chaos: throttle DA retrieval to mimic archival outages; verify backoff, retries, and the switch to read-only degraded mode.
  • Quarterly cross-chain pause: simulate an RMN pause; verify queued transfers, cap raises, and idempotent resumes work as expected. (blog.chain.link)
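Game days are most valuable when the pass/fail criteria are executable. A sketch of the weekly drill's assertion, assuming a breaker object exposing `observe(divergence_bps)` and a `tripped` flag (both names are illustrative; the toy breaker below stands in for your real one):

```python
class CircuitBreaker:
    """Toy breaker for drill purposes: trips immediately past its limit."""

    def __init__(self, limit_bps: float):
        self.limit_bps = limit_bps
        self.tripped = False

    def observe(self, divergence_bps: float) -> None:
        if divergence_bps > self.limit_bps:
            self.tripped = True

def weekly_divergence_drill(breaker: CircuitBreaker,
                            injected_bps: float = 10.0,
                            blocks_allowed: int = 1) -> bool:
    """Inject a synthetic divergence; require a trip within `blocks_allowed` blocks."""
    for _block in range(blocks_allowed):
        breaker.observe(injected_bps)  # synthetic feed observation for this block
        if breaker.tripped:
            return True
    return False

# The drill passes only if the breaker fires within one block.
assert weekly_divergence_drill(CircuitBreaker(limit_bps=4.0)) is True
```

Wiring this into CI against a staging deployment turns the drill cadence into a regression suite for your incident response.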

30/60/90-day rollout plan

  • 30 days

    • Take stock of all verification paths (VCs, oracle reports, cross-chain messages).
    • Set up SLOs and initial error budgets, wire up those Prometheus exporters, and get Grafana up and running.
    • Roll out P1 paging for any verification failures and divergences.
  • 60 days

    • Set up DA and blob monitors; implement CCIP rate limits and anomaly-pause alerts.
    • Create three playbooks: divergence, cross-chain pause, and chain halt.
    • Kick off weekly synthetic tests and roll in PagerDuty automation. (pagerduty.com)
  • 90 days

    • Align incident response with NIST SP 800‑61 Rev. 3 and map alerts to ISO 27001:2022 Annex A controls.
    • Stand up DORA-ready templates and the reporting pipeline (if in scope).
    • Run a red-team exercise focused on oracle manipulation and data-retrieval failure modes. (csrc.nist.gov)

Buyer’s checklist for VDS vendors and partners

  • Integrity guarantees: on-chain verifiability, auditable signing keys, and clear key-rotation policies.
  • Latency and HA: sub-second median on pull paths, documented failover, and report deduplication.
  • Controls: ISO 27001:2022 certification; prefer SOC 2 Type 2 where applicable. (dqsglobal.com)
  • Cross-chain risk controls: rate limits, anomaly detection, pause semantics, and publicly documented operational runbooks. (blog.chain.link)
  • DA posture: published sampling metrics, archival-node SLAs, operator diversity, and a slashing regime if EigenLayer is used. (coindesk.com)

Closing thought

Verifiability isn’t a cryptographic buzzword--it’s a commitment you demonstrate minute by minute. With the right SLOs, layered signals, and well-rehearsed playbooks, protocols stay resilient through chain pauses, data-availability disruptions, and cross-chain anomalies while preserving user trust.

Want these monitors, error budgets, and drills in your stack? 7Block Labs can stand up a production-ready VDS operating model in 90 days, with templates, dashboards, and playbooks tailored to your chains, oracles, and compliance scope.

Like what you're reading? Let's build together.

Get a free 30-minute consultation with our engineering team.
