7Block Labs
Blockchain Technology

ByAUJay

Best Geo-Distributed Solana RPC Services and x402-fetch: Designing High-Availability RPC


Why this matters now

If your Solana product feels slow or unreliable, the RPC layer is often the culprit. Solana's quirks — globally distributed users, bursty traffic, SWQoS-gated write lanes, and heavy archival reads — punish a naive setup. Public endpoints are not designed for production and carry strict rate limits; production apps should use dedicated or private RPC with clear failure domains and documented service level agreements (SLAs). (solana.com).

Below is an overview of the leading geo-distributed Solana RPC platforms, followed by a reference design you can implement this quarter.


The 2026 landscape: who actually runs fast, globally distributed Solana RPC?

When evaluating providers, weigh three things: genuine multi-region presence with real regional options, SWQoS and staked write paths, and specialized data or streaming services that stand out.

  • Helius
  • What's special: the Sender write pipeline with seven regional submitters and parallel submission to both Jito and Helius, staked connections out of the box, LaserStream gRPC, and regional endpoints in Frankfurt, Amsterdam, Tokyo, Singapore, Los Angeles, London, Newark, Pittsburgh, and Salt Lake City. SOC 2 compliant, with a reported 99.99% RPC success rate. (helius.dev).
  • Why it matters: effective region selection for reads and landing-sensitive writes, plus clear SWQoS via staked connections.
  • Triton One
  • What's special: dedicated nodes across North America, Europe, and Asia-Pacific, with geo-DNS and auto-failover that shifts traffic from a failed local pool to global backups. Indexes tuned specifically for getProgramAccounts (gPA), Geyser-fed gRPC streaming, and full historical ledger access, with a premium "Solana Historical RPC" tier at extra cost. Their "Project Yellowstone" suite adds Dragon's Mouth (gRPC), Steamboat (accelerated gPA), and Whirligig (WebSockets). (triton.one).
  • Why it matters: a strong fit for read-heavy apps such as wallets and explorers, and for teams that want managed historical data and streaming without building their own indexers.
  • QuickNode
  • What's special: Anycast and geo routing connect you to the nearest location automatically, recently published p95 latency benchmarks back up the global performance claims, and an add-on provides MEV protection and recovery for Solana sends. (quicknode.com).
  • Why it matters: their published cross-region p95 metrics are a useful baseline for comparison — but always run your own benchmarks as well.
  • Alchemy
  • What's special: a purpose-built Solana stack advertising roughly 2x throughput and 99.99% reliability, multi-region failover, and fast gRPC streaming. (alchemy.com).
  • Why it matters: if your team already uses Alchemy cross-chain, you get Solana-specific performance features on the same platform.
  • Syndica
  • What's special: strategically geo-located Solana RPC with a 99.99% uptime SLA, plus ChainStream and related APIs. (syndica.io).
  • Why it matters: a Solana-focused team with an enterprise-friendly engagement model, including direct support over Slack and Telegram.
  • GetBlock
  • What's special: shared Solana endpoints selectable by region — Frankfurt, New York, or Singapore — which keeps latency low for EU, US, and APAC users, plus MEV-protected JSON-RPC endpoints and simple region selection from the dashboard. (getblock.io).
  • Why it matters: region pinning without moving up to dedicated nodes.
  • dRPC
  • What's special: a decentralized RPC fabric with native SWQoS support for Solana and MEV protection on premium endpoints. (drpc.org).
  • Why it matters: decentralization-minded teams keep enterprise features such as analytics and SLAs without giving up either.
  • ERPC (Validators DAO)
  • What's special: SWQoS endpoints (starting with FRA) that expose staked lanes, ongoing network-route tuning across regions, and detailed documentation on SWQoS economics and options, including elSOL. (erpc.global).
  • Why it matters: access to SWQoS materially improves transaction landing when the network is congested.
  • Also worth a look: Chainstack offers dedicated and elastic nodes across global regions, while BlockPI and dRPC focus on open-source tooling and decentralized routing. (chainstack.com).

A practical tip: run your own benchmarks against the RPC methods you will actually use in production — gPA, getMultipleAccounts, getLatestBlockhash, and sendTransaction — across different commitment levels. Track p50, p95, and p99 latencies, and record every error code you encounter.
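A minimal benchmarking harness along these lines (a sketch, assuming Node 18+ with global fetch; the run count and methods are yours to tune):

```typescript
// Nearest-rank percentile over a copy of the samples.
export function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  if (sorted.length === 0) return NaN;
  const idx = Math.min(sorted.length - 1, Math.ceil(p * sorted.length) - 1);
  return sorted[Math.max(0, idx)];
}

// Time one JSON-RPC method against one endpoint and summarize the tail.
export async function benchMethod(
  url: string,
  method: string,
  params: unknown[],
  runs = 50
): Promise<{ p50: number; p95: number; p99: number; errors: number }> {
  const latencies: number[] = [];
  let errors = 0;
  for (let i = 0; i < runs; i++) {
    const started = Date.now();
    try {
      const res = await fetch(url, {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify({ jsonrpc: "2.0", id: i, method, params }),
      });
      if (!res.ok) errors++;
      else latencies.push(Date.now() - started);
    } catch {
      errors++; // count timeouts/network failures, don't let them skew latency
    }
  }
  return {
    p50: percentile(latencies, 0.5),
    p95: percentile(latencies, 0.95),
    p99: percentile(latencies, 0.99),
    errors,
  };
}
```

Run it per provider, per region, for each method you care about, and compare the p95/p99 columns rather than averages.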


Primer: SWQoS is table stakes for serious Solana sends

Stake-Weighted QoS (SWQoS)

With Stake-Weighted QoS (SWQoS), leaders prioritize QUIC connections from staked peers: roughly 80% of TPU capacity is reserved for staked connections, leaving about 20% for non-staked ones.

Running SWQoS yourself means pairing a trusted RPC node with a staked validator — setting staked-nodes overrides on the validator and configuring TPU peering on the RPC — so your transactions reach leaders even when the network is congested. If that's more than you want to own, pick a provider that manages it for you. (solana.com).

Helius, dRPC, and ERPC all offer managed SWQoS-backed write paths, so you don't have to operate the validator/RPC pairing yourself. (helius.dev).


A pragmatic, high‑availability (HA) reference architecture

Design for Independent Failure Domains and Explicit Intent

Reads, writes, and streams have different failure modes and requirements, so treat them separately and design a strategy for each.

Separate Strategies

Reads

Reads are about low latency and reliability: invest in caching and in data-access patterns that answer user queries as quickly as possible.

Writes

Writes are about durability and consistency: solid transaction handling and replication keep your data intact when things go wrong.

Streams

Streams are about real-time delivery and scalability: an event-driven architecture absorbs heavy data flows without falling behind.

A distinct strategy for each gives you a system that degrades gracefully under failure and meets user expectations.

1) Reads (HTTP JSON‑RPC)

  • Run active-active across at least two providers in different regions, matched to where your users are — for example, LAX paired with FRA.
  • Use a latency-aware HTTP client with circuit breaking and per-method budgets; be more conservative on getLatestBlockhash than on getBlock.
  • Prefer batched, field-selective calls — getMultipleAccounts, or gPA with filters — to cut round trips, and consider providers with gPA acceleration and indexes such as Triton Steamboat. (triton.one).
  • Set minContextSlot on critical reads so a failover between providers a few slots apart can never hand you older state. (solana.com).

2) Streams (WebSocket/gRPC)

  • Subscribe to two independent streams — for example, Helius LaserStream plus another provider's WebSocket — and reconcile them by slot.
  • Prefer streams with replay: Helius offers historical replay and redundancy for LaserStream, so a dropped connection doesn't mean lost events. (helius.dev).
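Per-method time budgets are easiest to keep in one table. A sketch — the numbers here are placeholders you'd replace from your own p95/p99 benchmarks:

```typescript
// Hypothetical per-method time budgets in milliseconds.
// Tune these from your own measured latencies, not from this example.
const METHOD_BUDGET_MS: Record<string, number> = {
  getLatestBlockhash: 300,
  getMultipleAccounts: 800,
  getProgramAccounts: 2000,
  sendTransaction: 1500,
};

// Look up a method's budget, falling back to a conservative default.
export function budgetFor(method: string, defaultMs = 1000): number {
  return METHOD_BUDGET_MS[method] ?? defaultMs;
}

// Usage with the standard AbortSignal timeout helper (Node 17.3+):
//   fetch(url, { signal: AbortSignal.timeout(budgetFor("getLatestBlockhash")) })
```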

3) Writes (sendTransaction)

  • Primary: a SWQoS-backed sender such as Helius Sender, ERPC SWQoS, or dRPC premium. Secondary: another provider with staked routing in a different region. (helius.dev).
  • Own your retries: cap sendTransaction maxRetries, run simulateTransaction preflight checks, and set minContextSlot. (solana.com). If your flow is MEV-sensitive, consider MEV-aware routing such as QuickNode's Solana MEV protection add-on. (quicknode.com).

4) Public endpoints are a stopgap

Solana Foundation public RPCs enforce strict per-IP and per-method rate limits; treat them as a diagnostic fallback, not a production dependency. (solana.com).

Concrete implementation patterns

1) Latency‑ and health‑aware RPC pool (Node.js)

// npm i node-fetch
import fetch from "node-fetch";
// AbortController is global in Node 16+, which Promise.any below also requires.

type Rpc = { name: string; url: string; hardTimeoutMs: number };

const RPCS: Rpc[] = [
  { name: "helius", url: process.env.HELIUS_URL!, hardTimeoutMs: 600 },
  { name: "triton", url: process.env.TRITON_URL!, hardTimeoutMs: 600 },
];

async function jsonRpcCall(rpc: Rpc, method: string, params: any[]) {
  const controller = new AbortController();
  const t = setTimeout(() => controller.abort(), rpc.hardTimeoutMs);

  const body = { jsonrpc: "2.0", id: Date.now(), method, params };
  const res = await fetch(rpc.url, {
    method: "POST",
    // Note: a Connection header alone doesn't reuse sockets; pass an http(s).Agent as `agent` for real keep-alive.
    headers: { "content-type": "application/json" },
    body: JSON.stringify(body),
    signal: controller.signal,
  }).finally(() => clearTimeout(t));

  if (!res.ok) throw new Error(`HTTP ${res.status} from ${rpc.name}`);
  return res.json();
}

export async function getLatestBlockhashHA() {
  const params = [{ commitment: "processed" }];
  // Race two providers; first healthy response wins.
  const promises = RPCS.map((r) => jsonRpcCall(r, "getLatestBlockhash", params));
  return Promise.any(promises); // requires Node 16+
}

Tip: include minContextSlot on state-sensitive reads during failover so you never ingest data older than what you last processed. (solana.com).
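One way to thread that slot floor through a client — the helper names here are illustrative, not part of any SDK:

```typescript
// Track the highest context slot you've consumed and thread it into
// subsequent reads as minContextSlot, so a failover provider can never
// serve state older than what you already processed.
let lastSeenSlot = 0;

// Record the slot from any JSON-RPC result carrying { context: { slot } }.
export function recordContextSlot(result: { context?: { slot?: number } }): void {
  const slot = result.context?.slot ?? 0;
  if (slot > lastSeenSlot) lastSeenSlot = slot;
}

// Merge minContextSlot into a request config once we have a floor.
export function withMinContextSlot(
  config: Record<string, unknown>
): Record<string, unknown> {
  return lastSeenSlot > 0 ? { ...config, minContextSlot: lastSeenSlot } : config;
}

// Example: getAccountInfo params with the slot floor applied.
// jsonRpcCall(rpc, "getAccountInfo", [address, withMinContextSlot({ commitment: "confirmed" })])
```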

2) Deterministic send logic with your own retry policy

import { Connection, VersionedTransaction } from "@solana/web3.js";

// Use SWQoS-backed primary sender; region-near your users.
const primary = new Connection(process.env.HELIUS_SENDER_URL!, { commitment: "confirmed" });
// Secondary in a different trust domain (provider + region)
const secondary = new Connection(process.env.ERPC_SWQOS_URL!, { commitment: "confirmed" });

async function sendWithPolicy(serializedTx: Buffer) {
  // Preflight simulation on the primary provider.
  // If it fails, bail early; if it passes, attempt the primary send with limited retries.
  const sim = await primary.simulateTransaction(VersionedTransaction.deserialize(serializedTx));
  if (sim.value.err) throw new Error(`preflight failed: ${JSON.stringify(sim.value.err)}`);

  try {
    // Let the node handle a bounded retry count; don't block on long leader rotations.
    const sig = await primary.sendRawTransaction(serializedTx, {
      skipPreflight: true,
      maxRetries: 3,
      preflightCommitment: "processed",
    });
    return sig;
  } catch (e) {
    // Secondary SWQoS lane
    return secondary.sendRawTransaction(serializedTx, {
      skipPreflight: true,
      maxRetries: 3,
      preflightCommitment: "processed",
    });
  }
}

maxRetries bounds node-side rebroadcasting, leaving your app free to implement its own escalation policy. (solana.com).

3) Stream redundancy with slot reconciliation

  • Consume updates from two independent streams, e.g., Helius LaserStream gRPC plus another provider's WebSocket.
  • Track the latest finalized slot per stream and forward an event to your app only once both streams have reached its slot. Helius's historical replay and backup clusters cover the gap if one feed drops. (helius.dev).

Emerging best practices you should adopt now

1) Use staked lanes for critical sends

Choose providers that route your traffic through staked validators with SWQoS; otherwise your transactions compete for the roughly 20% non-staked lane when things get hectic. (solana.com).

2) Pin Regions Explicitly for Latency

If you have APAC users, pin services to Singapore or Tokyo. Region-specific infrastructure from providers like GetBlock and Helius beats a generic global anycast setup. (getblock.io).

3) Use archival shortcuts instead of brute force

Helius's getTransactionsForAddress merges gSFA and getTransaction into a single filtered, paginated call, with significant p99 improvements for historical queries since launch. Use it where available instead of hand-rolled pagination loops. (helius.dev).

4) Indexer-grade gPA and getMultipleAccounts

Providers like Triton offer custom indexing that accelerates gPA. On the application side, keep memcmp filters tight and parameters narrow. (triton.one).
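As an illustration, here is a tightly-filtered gPA request body. The program id and field offsets are assumptions you'd derive from your actual account layout (offset 32 and size 165 happen to match SPL token accounts):

```typescript
// Build a getProgramAccounts request that filters server-side and slices the
// returned data, instead of pulling every account whole.
export function buildGpaRequest(programId: string, ownerPubkeyBase58: string) {
  return {
    jsonrpc: "2.0",
    id: 1,
    method: "getProgramAccounts",
    params: [
      programId,
      {
        encoding: "base64",
        filters: [
          { dataSize: 165 }, // e.g., SPL token account size
          // Match only accounts whose owner field (offset 32) equals this pubkey.
          { memcmp: { offset: 32, bytes: ownerPubkeyBase58 } },
        ],
        // Fetch only the bytes you need (here: the 8-byte amount field).
        dataSlice: { offset: 64, length: 8 },
      },
    ],
  };
}
```

Every filter you add moves work from your network and parser onto the provider's index, which is exactly where indexer-grade backends shine.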

5) MEV‑aware user flows

For DEX/UI transaction flows, check whether your provider offers MEV protection — QuickNode, for instance, has an add-on for exactly this. (quicknode.com).

6) Treat public endpoints as a safety net

Public endpoints are rate-limited and will block abusive traffic. Honor the Retry-After header, and treat any 403 or 429 as the signal to shift traffic to your private endpoints. (solana.com).
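A small decision helper for that policy (a sketch; it handles only delta-seconds Retry-After values, not HTTP-date forms):

```typescript
// Decide whether to wait or fail over based on status code and Retry-After.
export function backoffFromResponse(
  status: number,
  retryAfterHeader: string | null
): { failover: boolean; waitMs: number } {
  // 403 from a public endpoint usually means blocked: switch providers now.
  if (status === 403) return { failover: true, waitMs: 0 };
  if (status === 429) {
    const secs = retryAfterHeader ? Number(retryAfterHeader) : NaN;
    // If the server told us how long to wait, respect it; otherwise fail over.
    return Number.isFinite(secs)
      ? { failover: false, waitMs: secs * 1000 }
      : { failover: true, waitMs: 0 };
  }
  // Anything else: no throttling action needed at this layer.
  return { failover: false, waitMs: 0 };
}
```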

Using x402‑fetch for pay‑per‑call RPC and burst capacity

x402 is an emerging pattern that turns the HTTP 402 Payment Required status into an on-chain payment flow per request: the server responds with 402 and the required payment details, the client authorizes the payment, then retries. Thirdweb ships client and React wrappers, and community packages exist for TypeScript, Node, Python, and Rust. (portal.thirdweb.com).

Why This Matters for RPC:

If you use a decentralized RPC network or a premium lane that charges per high-cost operation — archival scans, SWQoS sends — x402 lets you scale usage on demand without pre-provisioning capacity. The client wrapper handles the payment flow inside fetch, leaving your business logic untouched. One caveat: the official HTTP spec reserves 402 without defining payment semantics, so its use here is a community convention — feature-detect gracefully.

Here's an example that wraps fetch with x402 and caps the spend per call:

// npm i x402-fetch viem
import { wrapFetchWithPayment } from "x402-fetch";
import { createWalletClient, http } from "viem";
import { privateKeyToAccount } from "viem/accounts";
import { base } from "viem/chains"; // e.g., USDC on Base per provider requirements

// Wallet used for per-request payments
const account = privateKeyToAccount(process.env.PK!);
const wallet = createWalletClient({ account, transport: http(), chain: base });

// Wrap global fetch
const fetchWithPay = wrapFetchWithPayment(fetch as any, wallet, /*maxValue=*/ BigInt(1_000_000)); // 1 USDC (6 decimals)

// Call a paid endpoint (e.g., premium archival or SWQoS gateway)
const res = await fetchWithPay(process.env.PAID_RPC_URL!, {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({
    jsonrpc: "2.0",
    id: 1,
    method: "getTransactionsForAddress",
    params: ["<address>", { limit: 100, order: "desc" }],
  }),
});
const data = await res.json();

Thirdweb also offers an HTTP proxy that settles 402 payments for you based on your wallet configuration, which simplifies frontends considerably. (portal.thirdweb.com).

Operational notes:

  • Budget guardrails: set a per-request maximum, track cumulative spend, and alert on anomalies.
  • Observability: log every 402 challenge and record the on-chain transaction hashes returned by the facilitator client for later reconciliation.
  • Graceful fallback: if a provider doesn't support x402, fall back to your flat-rate provider.
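A minimal spend guard for the first bullet might look like this (amounts in USDC base units, 6 decimals; an illustrative sketch, not a library API):

```typescript
// Enforces both a per-request cap and a rolling daily cap before a payment
// is authorized. Reset or rotate the instance on your accounting boundary.
export class SpendGuard {
  private spent = 0n;

  constructor(
    private perRequestMax: bigint, // max payment for any single call
    private dailyMax: bigint // max cumulative spend per day
  ) {}

  // Returns true and records the spend only if both caps allow it.
  authorize(amount: bigint): boolean {
    if (amount > this.perRequestMax) return false;
    if (this.spent + amount > this.dailyMax) return false;
    this.spent += amount;
    return true;
  }

  get totalSpent(): bigint {
    return this.spent;
  }
}
```

Call `authorize` with the amount quoted in each 402 challenge before letting the payment client proceed, and alert when it starts returning false.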

Decentralized RPC projects on Solana, such as GenesysGo's Shadow RPC/Premium, already accept USDC payments and reward operators. x402 gives you a programmatic way to consume these paid services as they adopt 402 challenges. (chaincatcher.com).


Putting it together: a geo‑distributed, SWQoS‑aware, x402‑enabled blueprint

  • Regions
  • Candidates: NA West (LAX), NA East (EWR/NYC), Europe (FRA, AMS), APAC (SG, TYO). Pick two primary regions that match your largest user cohorts — for example, LAX and FRA.
  • Providers
  • Reads: Triton and QuickNode active-active; route users to the nearest region via provider geo-DNS plus your own latency probes. (triton.one).
  • Streams: Helius LaserStream gRPC, with a backup WebSocket from another provider. (helius.dev).
  • Writes: Helius Sender as primary (staked connections, multiple regions); ERPC SWQoS in a different region as secondary. (erpc.global).
  • Burst economics
  • For archival-heavy work, backfills, and large NFT drops, escalate to a paid x402 endpoint once you hit internal limits; otherwise stay on flat-rate plans.
  • Controls
  • Set minContextSlot for cross-provider reads, and enforce per-method timeouts — say, ~300 ms for getLatestBlockhash and ~2 s for gPA.
  • Make writes idempotent: the signature is the idempotency key, so avoid re-signing the same intent, and reconcile with getSignatureStatuses.
  • Observability
  • Track p50/p95/p99, 5xx/429/403 rates, and SWQoS lane usage per provider. Alert on stream drift (slot delta > N) and on anomalous 402 payment rates.
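The idempotent-write control above can be sketched as a status classifier plus a polling loop. `RpcLike` is a stand-in interface for any client exposing getSignatureStatuses (e.g., a web3.js Connection):

```typescript
// The signature is the idempotency key: re-broadcast the same signed bytes
// if needed, but never re-sign the same intent. Reconcile via status polling.
type SigStatus = { err: unknown; confirmationStatus?: string } | null;

interface RpcLike {
  getSignatureStatuses(sigs: string[]): Promise<{ value: SigStatus[] }>;
}

// Map a raw signature status to a verdict.
export function classifyStatus(
  st: SigStatus
): "pending" | "failed" | "confirmed" | "finalized" {
  if (!st) return "pending";
  if (st.err) return "failed";
  if (st.confirmationStatus === "finalized") return "finalized";
  if (st.confirmationStatus === "confirmed") return "confirmed";
  return "pending";
}

// Poll until the signature resolves or the time budget runs out.
export async function awaitConfirmation(
  rpc: RpcLike,
  sig: string,
  timeoutMs = 15_000,
  pollMs = 500
): Promise<"confirmed" | "finalized" | "failed" | "timeout"> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const { value } = await rpc.getSignatureStatuses([sig]);
    const verdict = classifyStatus(value[0]);
    if (verdict !== "pending") return verdict;
    await new Promise((r) => setTimeout(r, pollMs));
  }
  return "timeout";
}
```

On "timeout", re-broadcast the original serialized transaction rather than building a new one — the cluster de-duplicates by signature.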

Practical, precise configuration details that move the needle

When calling getLatestBlockhash or getAccountInfo from a backup provider, include minContextSlot so a failover never serves you stale state. (solana.com).

  • For sendTransaction: run preflight checks on your primary provider, then send with skipPreflight:true and a capped maxRetries to keep tail latency bounded. (solana.com). If you use SDKs that abstract this away, confirm you can still control preflightCommitment and maxRetries directly.
  • If you run your own stacks and want SWQoS: on the validator, configure staked-nodes-overrides with the identity→lamports mapping; on the RPC, use --rpc-send-transaction-tpu-peer to forward transactions through your paired validator. Many teams will reasonably prefer a managed SWQoS service instead. (solana.com).

For archives and history, prefer provider-specific consolidated endpoints such as getTransactionsForAddress over block scans — both cost and latency typically drop substantially. (helius.dev).

  • Global coverage: for real APAC performance, confirm your provider runs actual APAC Solana clusters — e.g., Helius in Singapore or Tokyo, or GetBlock in Singapore. (helius.dev).

How Firedancer’s rollout affects your RPC strategy

Firedancer, Solana's second independent high-performance validator client, reduces single-client risk and raises throughput ceilings. Multiple reports place its mainnet debut in December 2025 with a small initial validator set, after it had already produced tens of thousands of blocks in testing. Factor that into 2026 capacity planning, but it is no reason to relax your RPC HA discipline. (coinpedia.org).


KPIs and SLOs we recommend to stakeholders

  • Read API SLOs: p95 under 120 ms for getLatestBlockhash and under 250 ms for getProgramAccounts with tight filters on hot programs; error rate below 0.2%.
  • Stream SLOs: missed slots below 0.01%; slot delta under 1 between dual streams at normal load; recovery within 3 seconds on provider failover.
  • Write SLOs: 99% of wallet transactions land within two leader rotations during congestion via SWQoS lanes; end-to-end user confirmation callback under 2 s at p95.
  • Cost guardrails: x402 bursts capped at 10% of daily request volume, per-request spend caps, and anomaly monitoring.

Fast provider short‑list and what to trial first

  • Trading/searchers: Helius Sender plus LaserStream first; ERPC SWQoS second; QuickNode with MEV protection third for mixed workloads. (helius.dev).
  • Consumer wallets/explorers: Triton for read performance and historical RPC; Helius for history APIs; GetBlock region pinning if you have significant APAC traffic. (triton.one).
  • Enterprise cross-chain teams: Alchemy for consolidation with Solana-specific throughput and streaming; also evaluate Syndica's uptime SLAs. (alchemy.com).
  • Decentralization-minded teams: dRPC or GenesysGo Shadow RPC Premium, both ready for x402 clients on pay-per-call lanes. (drpc.org).

Final checklist (print this)

  • Map users to regions and pin endpoints explicitly rather than accepting a generic global default.
  • Choose two read providers and one replay-capable streaming provider, with a second stream as backup.
  • Route writes through SWQoS lanes and own your retry policy.
  • Set minContextSlot, then establish time budgets for each method.
  • Adopt x402-fetch for metered endpoints so you don't have to provision for bursts.
  • Review weekly: p95/p99, and cost per million calls by method.
  • Run a monthly game day: fail a region, exhaust a provider, and verify your SLOs hold.

Teams that adopt this architecture and these providers deliver what Solana users expect: fast confirmations, quick state reads, and dashboards that stay current even when the network is busy.

Like what you're reading? Let's build together.

Get a free 30-minute consultation with our engineering team.

7BlockLabs

Full-stack blockchain product studio: DeFi, dApps, audits, integrations.

7Block Labs is a trading name of JAYANTH TECHNOLOGIES LIMITED.

Registered in England and Wales (Company No. 16589283).

Registered Office address: Office 13536, 182-184 High Street North, East Ham, London, E6 2JA.

© 2026 7BlockLabs. All rights reserved.