7Block Labs
Blockchain Technology

By AUJay

What Open-Source Servers Are Optimized for Web3 Endpoints?

Short Version

If you’re on the hunt for high-throughput, low-latency Web3 endpoints, there’s no need to shell out for a black-box gateway. Nowadays, some of the top open-source servers can totally handle the job--think Erigon, Reth, Nethermind, Besu, Geth (EVM); Solana validator with Geyser; Graph Node and Subsquid for your GraphQL needs; Cosmos/CometBFT stacks; Pathfinder/Juno for Starknet; and Electrs for Bitcoin. By deploying these with the right setup, tuning, and precautions, you can create production-grade endpoints that really get the job done. Check out more details here: (docs.erigon.tech).


Who this is for

Decision-makers at startups and larger companies who are weighing whether to run their own Web3 endpoints (RPC, WS, GraphQL, or gRPC) or go with managed providers.


What we mean by “Web3 endpoint”

  • You've got execution endpoints (JSON-RPC over HTTP/WS/IPC) ready for EVM and other chains. Check them out here: (ethereum.org)
  • Plus, there are GraphQL or gRPC data APIs that sit on top of full nodes or specialized indexers. Dive into the details here: (besu.hyperledger.org)

The short list: open-source servers that actually optimize for Web3 workloads

EVM execution clients (Ethereum and EVM chains)

1) Erigon (Go)

Why teams choose it

  • It has a dedicated RPC process (rpcdaemon) that you can scale on its own. Plus, it supports HTTP/WS, GraphQL, and gRPC. It even comes with batching and read concurrency flags, making it super efficient for handling high requests per second. (github.com)
  • Erigon is really good at disk I/O efficiency thanks to its staged sync and MDBX-backed flat state. This makes it a solid choice for archive/RPC while keeping storage costs low. (github.com)
  • It offers a straightforward port model with clear recommendations on what to expose--public for P2P and private for Engine/DB/RPC. (docs.erigon.tech)

Operator Notes That Move the Needle

  • Run rpcdaemon separately from your main process, lock it down to local NVMe storage, and tune --rpc.batch.concurrency, --rpc.batch.limit, and --db.read.concurrency. If you’re going for maximum throughput, consider turning off compression with --http.compression=false and --ws.compression=false. (github.com)
  • For internal services, use the gRPC KV interface for hot paths, since it outperforms JSON-RPC. (docs.erigon.tech)
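Since rpcdaemon serves batched requests concurrently, grouping point reads into one HTTP round trip is often the cheapest win. A minimal sketch, assuming a local rpcdaemon on port 8545; the zero address is a placeholder:

```shell
# Build a JSON-RPC batch: one balance read plus the head block number.
# The batch is worked off concurrently, bounded by --rpc.batch.concurrency.
ADDR="0x0000000000000000000000000000000000000000"  # placeholder account

batch_payload() {
  cat <<EOF
[{"jsonrpc":"2.0","id":1,"method":"eth_getBalance","params":["$ADDR","latest"]},
 {"jsonrpc":"2.0","id":2,"method":"eth_blockNumber","params":[]}]
EOF
}

# Send it to a running rpcdaemon (assumed at 127.0.0.1:8545):
# curl -s -H 'Content-Type: application/json' -d "$(batch_payload)" http://127.0.0.1:8545
batch_payload
```

One batched POST like this replaces two sequential round trips, which is exactly the pattern the batch flags above are tuned for.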

2) Reth (Rust)

Why Teams Choose It

  • Reth is a modern, modular client with a fast RPC stack built on jsonrpsee. It’s well suited to high-performance RPC, MEV, simulations, and indexing, and has shipped production-ready features since Reth 1.0. (reth.rs)
  • Getting started is as simple as “reth node --http --ws”, with fine-grained API control on top. It also supports batching and WebSocket subscriptions. (reth.rs)
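On the WebSocket side, a newHeads subscription replaces block polling entirely. A sketch of the standard eth_subscribe payload (the 8546 port and the websocat tool are assumptions of this example, not Reth requirements):

```shell
# Standard eth_subscribe request for new block headers over WS.
sub_payload() {
  echo '{"jsonrpc":"2.0","id":1,"method":"eth_subscribe","params":["newHeads"]}'
}

# Against a node started with `reth node --http --ws` (requires websocat):
# sub_payload | websocat ws://127.0.0.1:8546
sub_payload
```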

Operator Notes

  • Think of this as a performance-first RPC node. Stick to a careful rollout plan and use blue/green upgrades--make sure to keep alternate clients ready to go for continuity. Remember, a bug back in September 2025 caused a brief hiccup for some mainnet nodes. (cryptotimes.io)

3) Nethermind (C#/.NET)

Why teams choose it

  • It’s been battle-tested on mainnet and the big L2s; has awesome I/O and RocksDB optimization; the latest 1.33.x version brings in a fresh UI and live pruning options; plus, there are in-depth performance-tuning docs available with tried-and-true sync profiles. Check it out on GitHub!

Operator notes

  • To achieve quick synchronization and a steady RPC, it’s best to pair the HeavyWrite DB mode with a high outbound peer rate and properly adjusted threads. Just remember to keep an eye on SSD sustained write performance, not just the peak. (docs.nethermind.io)

4) Hyperledger Besu (Java)

Why teams choose it

  • Enterprise-friendly: built-in JWT/RBAC for RPC, liveness/readiness endpoints, and solid batch request controls. Performance improvements continue to land in the 25.x releases. (besu.hyperledger.org)

Operator Notes

  • Only enable the namespaces you actually need.
  • Set --rpc-http-max-batch-size to keep your load predictable.
  • Don't forget to front your RPC with TLS termination and JWT auth where it makes sense. (besu.hyperledger.org)
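Put together, a hardened Besu launch might look like the sketch below. Flag names follow the Besu CLI reference; the namespace list, batch cap, and key path are illustrative choices, not recommendations:

```shell
# Illustrative only: HTTP RPC with a trimmed namespace list, a batch cap,
# and JWT-authenticated access; readiness/liveness probes share this port.
besu \
  --rpc-http-enabled \
  --rpc-http-api=ETH,NET,WEB3 \
  --rpc-http-max-batch-size=16 \
  --rpc-http-authentication-enabled \
  --rpc-http-authentication-jwt-public-key-file=/etc/besu/jwt.pub
```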
5) Geth (Go)

Why teams choose it

  • Geth is the tried-and-true Ethereum client that has been thoroughly battle-tested. It offers straightforward RPC/WS/IPC support plus an EIP‑1767 GraphQL endpoint, and you can count on regular performance and API updates. (geth.ethereum.org)

Operator Notes

  • The 1.16.x releases brought some neat RPC improvements, such as eth_sendRawTransactionSync, which is handy for L2 workflows, along with support for scheduling network forks, so plan your upgrades accordingly. Also note that the personal RPC namespace was removed in late 2024. (geth.ethereum.org)
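A typical locked-down Geth serving setup can be sketched as below. Flag names are from the Geth CLI documentation; the namespace list, CORS origin, and vhost are illustrative:

```shell
# Illustrative only: HTTP + GraphQL with explicit namespaces and strict
# CORS/vhosts; GraphQL is served at /graphql on the HTTP port.
geth \
  --http --http.api eth,net,web3 \
  --graphql \
  --http.corsdomain "https://app.example.com" \
  --http.vhosts "rpc.internal.example.com"
```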

When to Mix Clients

  • If you're working with RPC fleets, it's a good idea to have at least two different clients running behind a health-aware proxy. This way, you can better handle any client-specific bugs and it fits right in with Ethereum’s philosophy of client diversity. Check out more details on ethereum.org.
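The health check behind such a proxy can be as simple as routing to a node only when eth_syncing returns false. A testable sketch of that decision logic; the live probe against a backend (hypothetical address) is left commented out:

```shell
# Decide routability from an eth_syncing response: a node that is still
# syncing returns a status object, a synced node returns false.
healthy() {  # usage: healthy '<json-rpc response body>'
  echo "$1" | grep -q '"result":false'
}

# Live probe against one backend (assumed address):
# resp=$(curl -s -m 2 -H 'Content-Type: application/json' \
#   -d '{"jsonrpc":"2.0","id":1,"method":"eth_syncing","params":[]}' \
#   http://erigon-1:8545)
# healthy "$resp" && echo "route to erigon-1"

healthy '{"jsonrpc":"2.0","id":1,"result":false}' && echo "routable"
```

Real proxies add latency budgets, head-lag checks, and circuit breakers on top, but this is the core predicate.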

Solana: validator RPC and the Geyser stack

What “optimized” means on Solana

The main Solana validator exposes a robust JSON-RPC, but what really takes things to the next level is the Geyser plugin framework. It enables real-time, high-volume streaming through gRPC or direct sinks like Kafka or Postgres, which is what makes tailored RPC-like services possible. (solana.com)

Production-grade options

  • If you’re setting up a validator with Geyser gRPC, go for solid hardware: recommended specs are around 24+ cores and a whopping 384GB of RAM to handle stress-tested streaming sessions.
  • There are also some handy lite RPCs out there, like the open-source “solana-lite-account-manager,” which lets you build lightweight RPCs on top of Geyser streams. (github.com)

Practical Guidance

  • Use specialized plugins to offload your account, slot, and transaction streams to Kafka/Postgres. Scale your consumers horizontally and steer clear of heavy HTTP calls like getProgramAccounts without filters or pagination. (github.com)
  • Just a heads up--some public providers rate-limit heavy Solana queries, so design your system for filtering and streaming instead of relying on polling.
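When a point read is unavoidable, at least push the filtering server-side. The sketch below builds a getProgramAccounts request scoped with a dataSize filter, using the SPL Token program and its 165-byte token-account size as a familiar example; the endpoint in the comment is a placeholder:

```shell
# getProgramAccounts with a server-side dataSize filter: only 165-byte
# accounts (SPL token accounts) are matched, instead of a full program scan.
gpa_payload() {
  cat <<'EOF'
{"jsonrpc":"2.0","id":1,"method":"getProgramAccounts",
 "params":["TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA",
   {"encoding":"base64","filters":[{"dataSize":165}]}]}
EOF
}

# curl -s -H 'Content-Type: application/json' -d "$(gpa_payload)" https://your-rpc.example
gpa_payload
```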

Query layers that serve your app smarter than raw RPC

  1) The Graph’s Graph Node (GraphQL)
  • Graph Node indexes subgraphs and serves them via GraphQL, with query-only nodes, Postgres sharding/replicas, and Firehose ingestion all covered. One caveat: some features require archive and trace support on the underlying EVM RPC. (thegraph.com)

Operator Notes

  • Set up dedicated “query nodes” kept separate from indexing nodes, and add Postgres read replicas to absorb traffic spikes. Prometheus metrics are exposed on port 8040 for monitoring. (thegraph.com)
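Queries then hit the query nodes over plain HTTP. A sketch follows; the subgraph name and the `transfers` entity and its fields are hypothetical, and Graph Node's default query port of 8000 is assumed:

```shell
# A GraphQL query against a deployed subgraph; entity and field names
# below are hypothetical examples, not part of any standard schema.
subgraph_query() {
  cat <<'EOF'
{"query":"{ transfers(first: 5, orderBy: blockNumber, orderDirection: desc) { id value } }"}
EOF
}

# curl -s -H 'Content-Type: application/json' -d "$(subgraph_query)" \
#   http://127.0.0.1:8000/subgraphs/name/my-org/my-subgraph
subgraph_query
```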
  2) Subsquid (Squid SDK + Network)
  • An open-source ETL+GraphQL framework that pulls historical data from a decentralized Archive network, often removing the need for an archival RPC. Squids typically index tens of thousands of blocks per second and provide a GraphQL API backed by Postgres. (docs.devsquid.net)

Operator Notes

  • Leverage the Squid SDK to batch ingest logs, receipts, and traces, then write everything to Postgres. Also, set up an API for easy access. For real-time processing, mix Archive ingestion with live RPC tails. Check out the details here: docs.devsquid.net

When to Add a Query Layer

  • If your product is churning out a ton of costly historical queries (like getLogs over big ranges, traces, or decoded events), think about implementing a query layer. It’ll give you better speed and save you money compared to just scaling raw RPC. Check it out here: (thegraph.com)

Cosmos (Cosmos SDK + CometBFT): gRPC first, REST via gRPC‑gateway

  • Cosmos SDK nodes provide gRPC for state queries along with gRPC-gateway REST. On the other hand, CometBFT handles the P2P consensus and has its own RPC (running on port 26657) featuring pubsub and light-client proofs. Just a heads up--don't make that publicly accessible without some proper controls in place. (docs.cosmos.network)

Operator notes

  • Make sure to keep gRPC (9090) secured behind your API gateway with TLS. Only expose CometBFT RPC internally or through strict ACLs. For better performance and type safety, go for gRPC when possible. Check out the details here: (docs.cosmos.network)
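For ad-hoc checks against the gRPC port, grpcurl works well. A sketch, assuming grpcurl is installed and the node has gRPC reflection enabled; the bech32 address is a truncated placeholder:

```shell
# Query an account's balances via the Cosmos SDK bank module over gRPC
# (port 9090 is the Cosmos SDK default).
grpcurl -plaintext \
  -d '{"address":"cosmos1..."}' \
  localhost:9090 \
  cosmos.bank.v1beta1.Query/AllBalances
```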

Starknet: Pathfinder (Rust) and Juno (Go)

  • Both projects are open-source full nodes that stick to the Starknet JSON‑RPC spec. As of late 2025, the current versions are RPC 0.8/0.9/0.10, while you'll find deprecation notices for the older 0.6/0.7 versions. Just make sure that your node and SDKs are in sync with the RPC version you’re using. (eqlabs.github.io)

Operator Notes

  • Make sure to sync up your Pathfinder/Juno versions with the target RPC and your app libraries (like starknet.js, etc.). It’s best to steer clear of mixing unsupported RPC versions across different environments. Check it out here: starknetjs.com.
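A quick way to verify what a node actually serves is the starknet_specVersion method. A sketch; the versioned path /rpc/v0_8 and port 9545 in the comment follow Pathfinder's conventions and are assumptions of this example:

```shell
# Ask a Starknet node which JSON-RPC spec version it implements.
spec_payload() {
  echo '{"jsonrpc":"2.0","id":1,"method":"starknet_specVersion","params":[]}'
}

# curl -s -H 'Content-Type: application/json' -d "$(spec_payload)" \
#   http://127.0.0.1:9545/rpc/v0_8
spec_payload
```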

Bitcoin: Electrs for wallet-grade endpoints

  • Electrs is a super efficient Electrum server built in Rust, and it’s mainly used by explorers (plus some variations like mempool.space) to quickly handle wallet queries. The latest updates are all about improving concurrency and making indexing friendly for SSDs. Check it out on GitHub!

Operator notes

  • This setup is designed for lightweight, high-QPS balance/history queries. It works best when paired with your bitcoind and a storage plan that’s friendly for caching.
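Electrs speaks the Electrum protocol: newline-delimited JSON-RPC over TCP (commonly port 50001). A handshake sketch; the client name string is arbitrary, and the nc invocation assumes a local server:

```shell
# Electrum protocol handshake: server.version negotiates the protocol
# version before wallet queries such as balance/history lookups.
ver_payload() {
  echo '{"id":1,"method":"server.version","params":["demo-client","1.4"]}'
}

# printf '%s\n' "$(ver_payload)" | nc 127.0.0.1 50001
ver_payload
```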

Reference architectures that work in 2025

1) EVM RPC for dapps/wallets (multi-client)

  • We're using a setup with 2× Erigon (archive) and 1× Reth (full) behind a JSON-RPC aware proxy that keeps an eye on health checks and does method-aware routing. Basically, we lean on Erigon when we need to dig into heavy historical data, while Reth is our go-to for those quick eth_call/trace_* tasks that need low latency.
  • If you're looking for open-source proxies to tweak for your needs, check out status‑im/eth‑rpc‑proxy for health/failover capabilities, or LlamaNodes’ web3‑proxy for some caching and load-balancing, plus private tx fan-out. You can find it all here: (github.com)
2) Solana Real-Time Analytics/API

  • 1× validator using Geyser Postgres/Kafka plugins, plus horizontally scaled API consumers that serve specific HTTP endpoints. Stick to vanilla JSON‑RPC for select queries and stream everything else through gRPC/consumers. (github.com)

3) GraphQL-First Product

  • Set up a Graph Node or Subsquid in front of one or more EVM nodes. Use a primary Postgres database along with one or more read replicas specifically for handling queries. You can show off your data using Prometheus/Grafana dashboards that are exposed by the nodes. Check out more info at thegraph.com.

4) Cosmos Service

  • We use gRPC for state reads and rely on gRPC-gateway REST for browser clients. The CometBFT RPC is secured internally, and when necessary, we prefer using light-client proofs. Check it out in more detail here: docs.cosmos.network

Tuning that materially improves Web3 endpoints

EVM Nodes

  • Erigon: Place the rpcdaemon on the same host and disk as your data. Tune the batch and read concurrency settings, and turn off compression when you need peak throughput. Also consider GraphQL for multi-field reads to cut down on round trips. (docs.erigon.tech)
  • Reth: Keep hot RPC traffic separate from peer traffic, run WebSocket subscriptions on dedicated instances, and treat upgrades like fleet management--stagger your nodes to avoid downtime. (reth.rs)
  • Nethermind: Use HeavyWrite DB mode when syncing. Make sure your SSD can handle sustained throughput, and fine-tune max peers and outgoing connection rates. Recent RocksDB builds help too. (docs.nethermind.io)
  • Besu: Enable JWT authentication, cap batch sizes, and use readiness/liveness endpoints for autoscaling and rollouts. (besu.hyperledger.org)
  • Geth: If you serve lots of small RPC calls, enable GraphQL (--graphql) to consolidate them. Keep your RPC namespaces tidy and maintain strict CORS/vhosts. (geth.ethereum.org)

Solana

  • It’s better to use Geyser streaming instead of heavy polling methods like getProgramAccounts. Make sure to filter and paginate your results! Save the JSON-RPC calls for when you need specific point reads. (solana.com)

Cosmos

  • It's a good idea to lean towards gRPC for your typed, scalable queries. If you have to expose CometBFT RPC, be sure to limit its exposure and handle CORS/headers with care. Check out the details in the Cosmos documentation.

Starknet

  • Standardize the current RPC versions across all nodes and SDKs to avoid subtle incompatibilities, and keep an eye on the deprecation timeline. (eqlabs.github.io)

Security and access control checklists

  • Keep those execution/consensus Engine APIs under wraps--don't go exposing them publicly (like the 8551 port on EVM). Stick to private networks for your EL<->CL communications and make sure to use JWT secrets. (docs.erigon.tech)
  • Whenever possible, turn on authentication (like the Besu JWT or good old username-password); make sure you wrap TLS at your ingress and don’t forget to keep those node ports private. (besu.hyperledger.org)
  • If you're using CometBFT, the project is explicit: don’t expose the RPC server without solid protections. Set up ACLs, rate limits, and request size caps. (docs.cometbft.com)
  • Don’t forget about role separation: have validators and peers on one tier, put your RPC and query nodes on another, and keep those indexers on a separate tier.

Concrete starting points

1) High-QPS EVM Read API (Finalized Reads)

  • For that 2× Erigon archive setup with rpcdaemon, you’ll want to run:

    erigon --private.api.addr=127.0.0.1:9090 --http=false  
    rpcdaemon --datadir=/data/erigon --http.api=eth,erigon,debug,trace,web3,net --rpc.batch.concurrency=64 --db.read.concurrency=256 --http.compression=false  
  • And for a bit of “hot” methods action, just fire up a 1× Reth full node:

    reth node --http --http.api eth,net,web3,debug,trace  
  • The proxy routes heavy historical calls to Erigon and “live” calls to Reth, with health checks and circuit breakers on both ends.

2) Solana Account Analytics

  • You can set up a validator with Geyser Postgres/Kafka plugins (think 20+ threads and batched writes), and then create a stateless API to pull data from the Postgres/Kafka topics. For live updates, just use WS/gRPC streams and, whenever you can, filter on-chain right at the source. Check it out here: github.com.

3) Query-First Product on EVM

  • Use Graph Node featuring “query-only” nodes along with Postgres read replicas. Make sure your upstream RPC is compatible with EIP‑1898 and trace_filter (archive+traces). Plus, it’s a good idea to set up Prometheus/Grafana dashboards for the operators. Check out more details at thegraph.com.

Emerging best practices (2025)

  • Consider using GraphQL or gRPC to bundle multiple JSON‑RPC calls into a single request where you can; both Geth GraphQL and Erigon GraphQL/gRPC support this. (geth.ethereum.org)
  • On Solana, replace heavy polling patterns (like getProgramAccounts over big account spaces) with streaming indexers or Geyser consumption. (solana.com)
  • On Ethereum, stay on top of client releases that tweak RPC behavior or roll out new methods. For instance, Geth 1.16.x now includes eth_sendRawTransactionSync, and several clients have changed tracing and blob behaviors since Cancun. Plan your maintenance windows accordingly. (geth.ethereum.org)
  • For Starknet, standardize your RPC versions in your CI environment and verify node/SDK compatibility on every upgrade. (docs.starknet.io)
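As a concrete instance of the first point, one Geth GraphQL request can replace a handful of JSON-RPC calls. A sketch; field names follow the EIP‑1767 schema, and the endpoint in the comment is illustrative:

```shell
# One GraphQL query fetching head-block fields that would otherwise take
# several JSON-RPC round trips (blockNumber, getBlock, per-tx lookups).
gql_payload() {
  cat <<'EOF'
{"query":"{ block { number gasUsed transactions { hash } } }"}
EOF
}

# curl -s -H 'Content-Type: application/json' -d "$(gql_payload)" \
#   http://127.0.0.1:8545/graphql
gql_payload
```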

Decision guide (quick picks)

  • Want the quickest and cheapest way to handle EVM archive reads at scale? Check out Erigon’s rpcdaemon. You can scale it horizontally or throw in a query layer like Graph Node or Subsquid to ease those historical queries. (docs.erigon.tech)
  • Looking for low-latency eth_call or traces during peak times? Just add some Reth nodes to your setup and direct your hot paths to them. (reth.rs)
  • If you’re an enterprise that needs tighter auth and better visibility, consider using Besu with JWT and health endpoints, all set up behind your corporate ingress. (besu.hyperledger.org)
  • Got your sights set on building real-time analytics for Solana? You’ll want a Validator paired with Geyser (Kafka/Postgres) and go for a stream-first API approach. (github.com)
  • Shipping multi-chain analytics with some complex historical queries? Subsquid or Graph Node on top of your node fleet will do the trick. (docs.devsquid.net)
  • Need Starknet RPC up and running today? Grab Pathfinder or Juno that aligns with RPC 0.9/0.10, but steer clear of those outdated versions. (docs.starknet.io)

Final take

Running your own endpoints has become pretty common, and it can actually give you a leg up when it comes to performance, cost savings, and keeping control over your data. Kick things off with the right client that matches your workload, throw in a query layer when your product requires it, and handle your endpoint fleet like you would with any crucial API: make sure it's versioned, easy to monitor, and properly maintained for upgrades.

If you're looking for a solid reference deployment or just a little tune-up for your current nodes, 7Block Labs has got you covered. They can design, set up, and hand over a robust, autoscaled stack that’s customized to fit your traffic needs.

Like what you're reading? Let's build together.

Get a free 30-minute consultation with our engineering team.

7BlockLabs

Full-stack blockchain product studio: DeFi, dApps, audits, integrations.

7Block Labs is a trading name of JAYANTH TECHNOLOGIES LIMITED.

Registered in England and Wales (Company No. 16589283).

Registered Office address: Office 13536, 182-184 High Street North, East Ham, London, E6 2JA.

© 2026 7BlockLabs. All rights reserved.