7Block Labs
Blockchain Technology

By AUJay

Best Geo-Distributed Solana RPC Services and x402-fetch: Designing High-Availability RPC

Summary: If Solana is your execution layer, your RPC is your SLO. This guide shows decision‑makers exactly which geo‑distributed Solana RPCs deliver in 2026 and how to design a multi‑provider, SWQoS‑aware, x402‑pay‑per‑call architecture that keeps reads fast, writes landing, and dashboards streaming—even during network stress.


Why this matters now

A great Solana product can still feel slow or flaky if your RPC layer isn’t engineered for the realities of the network: global users, spiky traffic, SWQoS‑gated write lanes, and heavy archival reads. Public endpoints are explicitly not for production and carry strict rate limits; production apps should rely on dedicated/private RPC with clear failure domains and documented SLAs. (solana.com)

Below is a current, concrete view of the best geo‑distributed Solana RPC platforms and a reference design you can implement this quarter.


The 2026 landscape: who actually runs fast, globally distributed Solana RPC?

When you evaluate providers, look for three things: global presence with real regional choice, SWQoS/staked write paths, and specialized data/streaming services.

  • Helius

    • What’s unique: Sender write pipeline with 7 regional submitters and parallel routing to Jito + Helius; staked connections by default; LaserStream gRPC with regional endpoints (FRA, AMS, TYO, SG, LAX, LON, EWR, PITT, SLC); SOC‑2 posture and 99.99% RPC success claims. (helius.dev)
    • Why it matters: True region selection for both reads and landing‑sensitive writes—and explicit SWQoS via staked connections.
  • Triton One

    • What’s unique: Dedicated nodes in NA/EU/APAC; geo‑DNS and auto‑failover from local to global backup pools; optimized indexes for getProgramAccounts (gPA); Geyser‑fed gRPC streaming; full historical ledger access; “Solana Historical RPC” as a priced add‑on; productized stack (“Project Yellowstone”: Dragon’s Mouth for gRPC, Steamboat for fast gPA, Whirligig for WebSockets). (triton.one)
    • Why it matters: Read‑heavy apps (wallets, explorers) and teams that want turnkey historical and streaming without rolling their own indexers.
  • QuickNode

    • What’s unique: Anycast/geo routing to the nearest location by default; recently published p95 latency benchmarks claiming top global performance; optional MEV‑protection & recovery for Solana sends. (quicknode.com)
    • Why it matters: If your KPI is raw p95 across regions, this is a credible baseline to test against (run your own benchmarks too).
  • Alchemy

    • What’s unique: Purpose‑built Solana stack with claims of 2x higher throughput, 99.99% reliability, multi‑region failover (3–5 layers), and fast gRPC streaming. (alchemy.com)
    • Why it matters: Teams standardizing on Alchemy cross‑chain now get first‑class Solana performance features.
  • Syndica

    • What’s unique: “Strategically geo‑located” Solana RPC, 99.99% uptime SLA available, plus ChainStream/mission‑critical APIs. (syndica.io)
    • Why it matters: A focused Solana shop with enterprise engagement patterns (white‑glove Slack/TG).
  • GetBlock

    • What’s unique: Region‑selectable shared Solana endpoints (Frankfurt, New York, Singapore) to cut latency in EU/US/APAC; MEV‑protected JSON‑RPC endpoints available; region selection in dashboard. (getblock.io)
    • Why it matters: Simple region pinning without jumping to dedicated nodes.
  • dRPC

    • What’s unique: Decentralized RPC fabric; native SWQoS support for Solana and MEV protection on premium endpoints. (drpc.org)
    • Why it matters: For teams prioritizing decentralization with enterprise features (analytics, SLAs).
  • ERPC (Validators DAO)

    • What’s unique: SWQoS endpoints (starting FRA) to access staked lanes; network path optimizations across regions; detailed docs on SWQoS economics and options (e.g., elSOL). (erpc.global)
    • Why it matters: If you need deterministic transaction inclusion during bursts, SWQoS access is a key primitive.
  • Chainstack, BlockPI, others

    • Chainstack offers dedicated/elastic nodes with global deployments; BlockPI/DRPC show open‑source and decentralized routing efforts. (chainstack.com)

Tip: Always run your own benchmark harness hitting the exact RPC methods you’ll call in production (gPA, getMultipleAccounts, getLatestBlockhash + sendTransaction with different commitments), and measure p50/p95/p99 plus error codes.
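A minimal percentile helper for such a harness might look like the sketch below (nearest‑rank percentiles; `Sample` and `summarize` are illustrative names, and the actual sampling loop against your endpoints is omitted):

```typescript
// Aggregate raw latency samples into p50/p95/p99 + error rate per method.
type Sample = { method: string; ms: number; ok: boolean };

function percentile(sortedAsc: number[], p: number): number {
  // Nearest-rank percentile over an ascending-sorted array.
  if (sortedAsc.length === 0) return NaN;
  const rank = Math.ceil((p / 100) * sortedAsc.length);
  return sortedAsc[Math.min(rank, sortedAsc.length) - 1];
}

function summarize(samples: Sample[]) {
  const byMethod = new Map<string, number[]>();
  for (const s of samples) {
    if (!s.ok) continue; // errors count toward errRate, not latency
    const arr = byMethod.get(s.method) ?? [];
    arr.push(s.ms);
    byMethod.set(s.method, arr);
  }
  const out: Record<string, { p50: number; p95: number; p99: number; errRate: number }> = {};
  for (const [method, arr] of byMethod) {
    arr.sort((a, b) => a - b);
    const total = samples.filter((s) => s.method === method).length;
    out[method] = {
      p50: percentile(arr, 50),
      p95: percentile(arr, 95),
      p99: percentile(arr, 99),
      errRate: 1 - arr.length / total,
    };
  }
  return out;
}
```

Feed it samples from the exact methods and commitments you use in production, per provider and per region.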


Primer: SWQoS is table stakes for serious Solana sends

Stake‑Weighted QoS (SWQoS) prioritizes QUIC connections to leaders for staked peers; roughly 80% of leader TPU capacity can be reserved for staked connections, leaving ~20% for non‑staked. Proper setups pair a trusted RPC with a staked validator (or use a provider that does this for you) so your sends reliably reach leaders even during congestion. Configuration involves a staked‑nodes override on the validator and explicit TPU peering on the RPC. (solana.com)

Providers like Helius, dRPC, ERPC, and others expose SWQoS‑backed write paths as a managed service so you don’t need to maintain validator/RPC pairing yourself. (helius.dev)


A pragmatic, high‑availability (HA) reference architecture

Design for independent failure domains and explicit intent: reads, writes, and streams each need separate strategies.

  1. Reads (HTTP JSON‑RPC)
  • Active‑active across at least two providers in different regions that match your user distribution (e.g., LAX + FRA).
  • Latency‑aware, circuit‑breaking HTTP client with per‑method budgets (e.g., stricter for getLatestBlockhash than getBlock).
  • Prefer batched calls and selective fields (e.g., getMultipleAccounts, gPA with filters) to reduce round‑trips; use providers with gPA acceleration/indexes (Triton Steamboat). (triton.one)
  • Enforce minContextSlot on critical reads to avoid time‑travel when failing over between providers that are a few slots apart. (docs.solanatracker.io)
  2. Streams (WebSocket/gRPC)
  • Dual‑subscribe to two independent streams (e.g., Helius LaserStream + a second provider’s WebSocket) and reconcile by slot.
  • Use replayable streams with persistence guarantees on the provider side; Helius advertises historical replay and redundancy for LaserStream. (helius.dev)
  3. Writes (sendTransaction)
  • Primary: a SWQoS‑backed sender (Helius Sender, ERPC SWQoS, dRPC premium). Secondary: a separate provider with staked routing in a different region. (helius.dev)
  • Control retries yourself with sendTransaction maxRetries, simulateTransaction pre‑checks, and minContextSlot. (solana.com)
  • For sensitive users, add MEV‑aware routing (e.g., QuickNode Solana MEV protection add‑on). (quicknode.com)
  4. Public endpoints as stopgaps only
  • Solana Foundation public RPCs carry strict per‑IP and per‑method limits; they should not be in your steady‑state path. Use them for diagnostics only. (solana.com)
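The per‑method budgets mentioned above can be captured in a small lookup table; the values below are the budgets this guide suggests, but they are starting points to tune against your own benchmarks, not provider guarantees:

```typescript
// Per-method hard timeout budgets (milliseconds) for the read pool.
// Values are illustrative defaults; tune them with your own p95 data.
const METHOD_BUDGET_MS: Record<string, number> = {
  getLatestBlockhash: 300,   // must be fast: gates every send
  getMultipleAccounts: 800,
  getProgramAccounts: 2000,  // heavier even with tight filters
  getBlock: 2500,
};

function budgetFor(method: string): number {
  return METHOD_BUDGET_MS[method] ?? 1000; // conservative default for the rest
}
```

Wire `budgetFor(method)` into the HTTP client's abort timer so a slow provider trips the circuit breaker per method, not globally.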

Concrete implementation patterns

1) Latency‑ and health‑aware RPC pool (Node.js)

// Node 18+ ships global fetch and AbortController; no extra packages needed.

type Rpc = { name: string; url: string; hardTimeoutMs: number };

const RPCS: Rpc[] = [
  { name: "helius", url: process.env.HELIUS_URL!, hardTimeoutMs: 600 },
  { name: "triton", url: process.env.TRITON_URL!, hardTimeoutMs: 600 },
];

async function jsonRpcCall(rpc: Rpc, method: string, params: any[]) {
  const controller = new AbortController();
  const t = setTimeout(() => controller.abort(), rpc.hardTimeoutMs);

  const body = { jsonrpc: "2.0", id: Date.now(), method, params };
  const res = await fetch(rpc.url, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(body),
    signal: controller.signal,
  }).finally(() => clearTimeout(t));

  if (!res.ok) throw new Error(`HTTP ${res.status} from ${rpc.name}`);
  return res.json();
}

export async function getLatestBlockhashHA() {
  const params = [{ commitment: "processed" }];
  // Race two providers; first healthy response wins (Promise.any: Node 15+).
  const promises = RPCS.map((r) => jsonRpcCall(r, "getLatestBlockhash", params));
  return Promise.any(promises);
}

Key detail: include minContextSlot for state‑sensitive reads when failing over, so you never accept data from behind your last known slot. (docs.solanatracker.io)
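One way to enforce this is a small slot floor that every read consults; the helper names below are illustrative, but the rule is exactly the one above — pass your highest accepted slot as minContextSlot and reject anything behind it:

```typescript
// Track the highest context slot accepted so far; failover reads must not
// regress behind it. Names (readParamsWithFloor, acceptContextSlot) are
// illustrative helpers, not a library API.
let lastSeenSlot = 0;

function readParamsWithFloor(base: Record<string, unknown>) {
  // Attach the floor to the JSON-RPC config object for state-sensitive reads.
  return { ...base, minContextSlot: lastSeenSlot };
}

function acceptContextSlot(slot: number): boolean {
  if (slot < lastSeenSlot) return false; // stale provider: retry elsewhere
  lastSeenSlot = slot;
  return true;
}
```

On each response, check `result.context.slot` with `acceptContextSlot` before trusting the payload; a `false` return means the failover provider is behind and the call should be retried on another endpoint.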

2) Deterministic send logic with your own retry policy

import { Connection, VersionedTransaction } from "@solana/web3.js";

// Use a SWQoS-backed primary sender, region-near your users.
const primary = new Connection(process.env.HELIUS_SENDER_URL!, { commitment: "confirmed" });
// Secondary in a different trust domain (provider + region).
const secondary = new Connection(process.env.ERPC_SWQOS_URL!, { commitment: "confirmed" });

async function sendWithPolicy(serializedTx: Buffer) {
  // Preflight simulation on the primary provider (assumes a versioned tx).
  // If simulation fails, bail early; if it passes, attempt the primary send with limited retries.
  const tx = VersionedTransaction.deserialize(serializedTx);
  const sim = await primary.simulateTransaction(tx);
  if (sim.value.err) throw new Error(`preflight failed: ${JSON.stringify(sim.value.err)}`);

  try {
    // Let the node handle a bounded retry count; don't block on long leader rotations.
    return await primary.sendRawTransaction(serializedTx, {
      skipPreflight: true, // already simulated above
      maxRetries: 3,
      preflightCommitment: "processed",
    });
  } catch (e) {
    // Fall back to the secondary SWQoS lane.
    return secondary.sendRawTransaction(serializedTx, {
      skipPreflight: true,
      maxRetries: 3,
      preflightCommitment: "processed",
    });
  }
}

maxRetries gives you control over node‑side rebroadcast so your app can implement its own escalation path. (solana.com)

3) Stream redundancy with slot reconciliation

  • Subscribe to accounts/logs on two streams (e.g., Helius LaserStream gRPC and another provider WebSocket).
  • Maintain “latestFinalizedSlot” per stream; only emit to your app layer after both streams are ≥ the slot of the event. Helius advertises historical replay and redundant clusters to avoid missed data. (helius.dev)

Emerging best practices you should adopt now

  1. Get onto staked lanes for critical sends
  • Use providers that forward via staked validators and expose SWQoS; otherwise your traffic competes for the ~20% non‑staked lane under load. (solana.com)
  2. Pin regions explicitly for latency
  • If your users are in APAC, pin to Singapore/Tokyo; GetBlock and Helius expose region‑specific infrastructure, not just a global anycast. (getblock.io)
  3. Use archival shortcuts instead of brute force
  • Helius’s getTransactionsForAddress collapses getSignaturesForAddress (gSFA) + getTransaction into one filtered, paginated call; their rollout shows large p99 improvements for historical queries. Use it when available instead of ad‑hoc loops. (helius.dev)
  4. Indexer‑grade gPA and getMultipleAccounts
  • Providers like Triton ship custom indexing to accelerate gPA; for your app, prefer tight memcmp filters and limit parameters. (triton.one)
  5. MEV‑aware user flows
  • For DEX/UI sends, consider MEV protection when your provider offers it (e.g., QuickNode add‑on). (quicknode.com)
  6. Public endpoints only as a backstop
  • They are rate‑limited and subject to blocks; respect Retry‑After and treat a 403/429 as a signal to fail over to private endpoints. (solana.com)

Using x402‑fetch for pay‑per‑call RPC and burst capacity

x402 is an emerging pattern that turns HTTP 402 Payment Required into an on‑chain, per‑request payment flow: the server responds 402 with payment requirements; the client signs a payment and retries automatically. Thirdweb ships client and React wrappers; community packages exist for TS/Node, Python, and Rust. (portal.thirdweb.com)

Why this matters for RPC:

  • If a decentralized RPC network or premium lane gates heavy calls (e.g., archival scans, SWQoS sends) behind metered access, x402 lets you elastically burst without pre‑provisioning capacity.
  • The client code wraps fetch and keeps your business logic unchanged. Note: 402 is not standardized for payments in HTTP specs; adoption is ecosystem‑driven, so feature‑detect gracefully. (developer.mozilla.org)

Example: wrap fetch with x402 and set a ceiling on what you’ll pay per call.

// npm i x402-fetch viem
import { wrapFetchWithPayment } from "x402-fetch";
import { createWalletClient, http } from "viem";
import { privateKeyToAccount } from "viem/accounts";
import { base } from "viem/chains"; // e.g., USDC on Base per provider requirements

// Wallet used for per-request payments
const account = privateKeyToAccount(process.env.PK!);
const wallet = createWalletClient({ account, transport: http(), chain: base });

// Wrap global fetch
const fetchWithPay = wrapFetchWithPayment(fetch as any, wallet, /*maxValue=*/ BigInt(1_000_000)); // 1 USDC at 6 decimals

// Call a paid endpoint (e.g., premium archival or SWQoS gateway)
const res = await fetchWithPay(process.env.PAID_RPC_URL!, {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({
    jsonrpc: "2.0",
    id: 1,
    method: "getTransactionsForAddress",
    params: ["<address>", { limit: 100, order: "desc" }],
  }),
});
const data = await res.json();

Thirdweb also offers an HTTP proxy that auto‑pays 402 based on your wallet config, reducing client complexity for frontends. (portal.thirdweb.com)

Operational notes:

  • Budget guardrails: set maxValue per request; alert on cumulative spend.
  • Observability: log the presence of 402 challenges and resulting on‑chain tx hashes from the facilitator client.
  • Graceful fallback: if the provider doesn’t speak x402, fall back to your flat‑rate provider.
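The graceful-fallback rule can be expressed as a thin wrapper around any fetch-like function; here `payAndRetry` is an illustrative seam standing in for x402-fetch's pay-and-retry flow, not a real provider API:

```typescript
// On a 402 challenge, try the x402 payment path; if the payment flow fails
// (or the provider doesn't actually speak x402), fall back to a flat-rate URL.
type Resp = { status: number; body?: string };
type FetchLike = (url: string, init?: unknown) => Promise<Resp>;

function withX402Fallback(
  baseFetch: FetchLike,
  payAndRetry: FetchLike,   // stand-in for the x402-fetch wrapped client
  fallbackUrl: string       // your flat-rate provider
): FetchLike {
  return async (url, init) => {
    const res = await baseFetch(url, init);
    if (res.status !== 402) return res;        // endpoint isn't metered; done
    try {
      return await payAndRetry(url, init);     // provider speaks x402: pay + retry
    } catch {
      return baseFetch(fallbackUrl, init);     // otherwise, flat-rate fallback
    }
  };
}
```

Log every 402 challenge and every fallback in this wrapper; that gives you the observability and spend-anomaly signals for free.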

Some Solana‑native decentralized RPC initiatives (e.g., GenesysGo’s Shadow RPC/Premium) have introduced USDC‑based payments and operator reward flows; x402 gives you a standard client path to consume such paid services programmatically as they adopt 402 challenges. (chaincatcher.com)


Putting it together: a geo‑distributed, SWQoS‑aware, x402‑enabled blueprint

  • Regions
    • NA West (LAX), NA East (EWR/NYC), EU (FRA/AMS), APAC (SG/TYO). Choose two primaries aligned with your largest user clusters (e.g., LAX + FRA).
  • Providers
    • Reads: Triton + QuickNode active‑active; pin users to nearest region via provider geo‑DNS and your own latency checks. (triton.one)
    • Streams: Helius LaserStream gRPC + backup WebSocket from another provider. (helius.dev)
    • Writes: Helius Sender primary (staked connections, multi‑region) with ERPC SWQoS as secondary in a different region. (helius.dev)
  • Burst economics
    • For heavy archival/backfills or NFT‑scale drops, route specific methods to a paid x402 endpoint when you hit internal quotas, otherwise use flat‑rate plans. (portal.thirdweb.com)
  • Controls
    • minContextSlot on cross‑provider reads; per‑method timeouts (e.g., 300 ms budget for getLatestBlockhash, 2 s for gPA).
    • Idempotency on writes: your signature is the id; never re‑sign the same intent; use getSignatureStatuses to reconcile.
  • Observability
    • Emit per‑provider p50/p95/p99, 5xx/429/403 rates, and SWQoS lane utilization.
    • Alert on drift between streams (slot delta > N), and on anomalous 402 payment rates.
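The write-idempotency control above (signature as the id, never re-sign the same intent) reduces to a tiny in-flight map; `sendOnce` is an illustrative name, and in a real client the injected `doSend` would wrap `sendRawTransaction` with `getSignatureStatuses` reconciliation:

```typescript
// The transaction signature is the idempotency key: a concurrent retry of the
// same intent reuses the in-flight send instead of re-signing or re-sending.
const inFlight = new Map<string, Promise<string>>();

function sendOnce(
  signature: string,            // base58 signature of the already-signed tx
  doSend: () => Promise<string> // performs the actual send, returns signature
): Promise<string> {
  const existing = inFlight.get(signature);
  if (existing) return existing; // same intent already in flight: join it
  const p = doSend();
  inFlight.set(signature, p);
  return p;
}
```

Entries should eventually be evicted (e.g., once the signature is finalized or expired per `getSignatureStatuses`), so the map doesn't grow unbounded.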

Practical, precise configuration details that move the needle

  • Use minContextSlot on getLatestBlockhash/getAccountInfo from a secondary provider so you never accept stale state during failover. (docs.solanatracker.io)
  • For sendTransaction:
    • Preflight on your primary only; then send with skipPreflight:true + bounded maxRetries to cap tail latency before escalation. (solana.com)
    • For SDKs that hide details, ensure you can pass preflightCommitment and maxRetries directly.
  • For SWQoS on self‑operated stacks:
    • Validator: configure staked‑nodes‑overrides with identity→lamports map.
    • RPC: set --rpc-send-transaction-tpu-peer to forward via your paired validator. In practice, most teams should consume a managed SWQoS service instead. (solana.com)
  • For archival and history:
    • Prefer provider‑specific consolidated endpoints (e.g., getTransactionsForAddress) instead of block‑scans; cost and latency collapse by an order of magnitude in many cases. (helius.dev)
  • For global coverage:
    • If you need APAC speed, ensure your provider actually has APAC Solana clusters (e.g., Helius SG/TYO; GetBlock Singapore). (helius.dev)
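For the self-operated SWQoS pairing described above, the shape of the configuration is roughly as follows. This is a sketch with placeholder values; flag names and the overrides-file format should be verified against your agave-validator version before use:

```shell
# 1) On the staked validator: grant virtual stake to your RPC node's identity
#    via a staked-nodes-overrides file (identity pubkey -> lamports).
cat > staked-overrides.yml <<'EOF'
staked_map_id:
  RPCnodeIdentityPubkeyPlaceholder1111111111: 5000000000000  # lamports
EOF
# ...existing validator flags elided...
agave-validator ... --staked-nodes-overrides staked-overrides.yml

# 2) On the RPC node: forward sendTransaction traffic through the paired
#    validator's TPU so sends ride the staked lane.
agave-validator ... --rpc-send-transaction-tpu-peer VALIDATOR_TPU_HOST:PORT
```

As the article notes, most teams are better served consuming a managed SWQoS product than operating this pairing themselves.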

How Firedancer’s rollout affects your RPC strategy

Firedancer introduces a second high‑performance validator client, reducing single‑client risk and improving throughput ceilings. Multiple reports indicate Firedancer went live on mainnet in December 2025 on a limited validator set, after producing tens of thousands of blocks in testing—a step toward a multi‑client Solana. Treat this as an upside to your 2026 capacity planning, not a reason to loosen RPC HA discipline. (coinpedia.org)


KPIs and SLOs we recommend to stakeholders

  • Read API SLOs
    • p95 < 120 ms for getLatestBlockhash; p95 < 250 ms for getProgramAccounts with filters near hot programs; error rate < 0.2%.
  • Stream SLOs
    • Missed slots < 0.01%; slot delta between dual streams < 1 under normal load; recovery < 3 s after provider failover.
  • Write SLOs
    • 99% of wallet sends land within 2 leader rotations under typical congestion using SWQoS lanes; end‑to‑end user confirmation callback < 2 s p95.
  • Cost guardrails
    • x402 bursts ≤ 10% of daily request volume; cap per‑request spend and alert on anomalies.

Fast provider short‑list and what to trial first

  • Trading/searchers: Helius Sender + LaserStream; ERPC SWQoS secondary; QuickNode with MEV protection as tertiary for mixed workloads. (helius.dev)
  • Consumer wallets/explorers: Triton for read performance + historical RPC; Helius for history APIs; GetBlock region pinning if APAC user share is high. (triton.one)
  • Enterprise cross‑chain teams: Alchemy for unified ops + Solana‑specific throughput/streaming; Syndica for uptime SLAs. (alchemy.com)
  • Decentralization‑minded: dRPC or GenesysGo Shadow RPC Premium, with x402 client readiness for pay‑per‑call lanes. (drpc.org)

Final checklist (print this)

  • Map users to regions; pin endpoints, don’t just “global default.”
  • Choose two read providers and one stream provider with replay; add a second stream as backup.
  • Put your writes on SWQoS lanes and control retry behavior.
  • Add minContextSlot and per‑method time budgets.
  • Stand up x402‑fetch for metered endpoints so bursts don’t require provisioning.
  • Benchmark weekly; track p95/p99 and cost per 1M calls by method.
  • Run a game day once per month: kill a region, saturate a provider, observe SLOs.

If you adopt the architecture and providers above, you’ll deliver the Solana UX your users expect: sub‑second confirmations, fast state reads, and dashboards that never skip a beat—even when the chain is busiest.

Like what you're reading? Let's build together.

Get a free 30‑minute consultation with our engineering team.


7Block Labs is a trading name of JAYANTH TECHNOLOGIES LIMITED.

Registered in England and Wales (Company No. 16589283).

Registered Office address: Office 13536, 182-184 High Street North, East Ham, London, E6 2JA.

© 2025 7BlockLabs. All rights reserved.