By AUJay
What Open-Source Servers Are Optimized for Web3 Endpoints? Most Stable Open-Source Servers for Blockchain APIs
Decision-Ready Guide to the Most Stable, High-Throughput Open-Source Servers for Blockchain APIs in 2026
Here’s your go-to guide for the most reliable and high-throughput open-source servers for blockchain APIs this year. We’ve got you covered with easy-to-use configurations and the latest best practices for production RPC, whether you’re at a startup or an established enterprise. All the info you see here is up-to-date as of January 7, 2026.
Top Open-Source Servers
- Geth
The battle-tested Ethereum execution client, with excellent stability and the largest community support.
- OpenEthereum
Once a popular high-performance alternative, but deprecated since 2021; for new deployments, prefer an actively maintained client such as Nethermind or Erigon (both covered below).
- Hyperledger Fabric
Well suited to permissioned networks, with a robust feature set.
- Substrate (Polkadot SDK)
The node framework behind Polkadot parachains, with a strong scalability story.
- Celo (celo-blockchain)
A Geth-derived client aimed at mobile-first applications.
Sample Configurations
Geth Configuration
Here’s a straightforward setup you can copy and paste. Note that the legacy --rpc* flags were removed in Geth 1.10+, and fast sync has been replaced by snap sync, so use the modern equivalents:
geth --syncmode snap --http --http.addr "0.0.0.0" --http.port 8545 --http.api "eth,net,web3" --cache 1024
OpenEthereum Configuration
OpenEthereum is no longer maintained; the command below is for legacy reference only, and exposing the JSON-RPC interface and CORS to "all" as shown is only safe behind a trusted proxy:
./openethereum --chain=mainnet --jsonrpc-port=8545 --jsonrpc-interface=all --jsonrpc-cors=all
Hyperledger Fabric Configuration
Ready to get started with Hyperledger? Here’s a basic config:
version: '2'
services:
  peer0.org1.example.com:
    image: hyperledger/fabric-peer
    ports:
      - 7051:7051
    environment:
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/fabric/msp
Best Practices
- Prioritize Security: Always secure your endpoints and monitor for unusual activity.
- Load Testing: Before going live, put your setup through its paces to handle peak loads.
- Regular Updates: Keep your servers updated with the latest patches and versions.
- Community Engagement: Stay active in forums and community channels to keep up with the latest trends and solutions.
By following these tips and using the configurations above, you'll be well on your way to running a top-tier blockchain API server in 2026!
TL;DR for busy decision‑makers
- Ethereum stack: Go with Geth or Nethermind for solid production stability. If you're after top-notch RPC throughput, take a look at Reth or Erigon. And if enterprise features are your thing, Besu’s got you covered. Mix and match clients to boost resilience. (geth.ethereum.org)
- Solana: Set up an Agave/Jito validator with Geyser gRPC. Make sure to plan your upgrades around 1.18.x and the Agave transition, and don’t forget to put those heavy read streams behind Geyser plugins. (github.com)
- OP Stack/Arbitrum: Use op-geth alongside op-node, but stick to the latest Nitro (ArbOS 51) for Arbitrum. Keep sequencer internals under wraps, and route transaction submissions to the sequencer via HTTP. (docs.optimism.io)
- Bitcoin: Pair bitcoind with electrs (mempool/electrs fork) to get wallet/explorer-grade RPC that scales smoothly. (github.com)
- Cosmos SDK chains: Serve gRPC first and use REST through the gRPC-gateway. Keep your CometBFT RPC (26657) private or locked down tight. (cosmos-docs.mintlify.app)
- Aptos & Sui: Aptos fullnodes are exposing REST, while Sui is shifting its production workloads over to gRPC. Get ready for that gRPC enablement and say goodbye to JSON-RPC. (docs.sui.io)
Why “server choice” matters more in Web3 than in Web2 APIs
Unlike traditional REST stacks, Web3 endpoints interact with stateful consensus software that needs to be in sync with an active network. In this setup, your “API server” often functions as the node client itself, and your selections have a real impact on:
- Finality semantics and correctness during reorgs
- RPC throughput for traces, logs, and simulations
- Upgrade safety through network hard forks
- Costs: disk space, IOPS, CPU usage per query
Picking reliable open-source servers and setting them up the right way can really help minimize the impact of outages and cut down on RPC expenses.
Ethereum execution clients (open‑source): stability vs performance trade‑offs
All five of the major execution clients are open-source. A good approach is usually to "run two" for better diversity and failover.
1) Geth (Go)
- Why go with it: It’s super battle-tested, has solid defaults, and works well with various tools. (geth.ethereum.org)
- Key changes for operators in 2024-2025:
- The default state scheme is shifting to PathDB, which will handle built-in historical state pruning for fresh databases. (mygit.top)
- They’re putting limits on the eth_feeHistory percentiles to help manage those tricky requests. This is a great way to safeguard your infrastructure. (newreleases.io)
- Minimal stable HTTP/WS enable:
geth \
  --http --http.addr 0.0.0.0 --http.port 8545 \
  --http.api eth,net,web3 \
  --http.corsdomain=https://yourapp.example \
  --http.vhosts=yourrpc.example \
  --ws --ws.addr 0.0.0.0 --ws.port 8546 --ws.api eth,net,web3
Flags and Transport Behavior from Geth Docs
It's super important to remember: never expose your admin, personal, or debug RPC endpoints publicly. This is a serious security risk! You can check out more details in the Geth documentation.
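One way to enforce this is a thin allowlist at the proxy layer that rejects any method outside the namespaces you intend to serve. Here's a minimal sketch (the allowlist contents are an assumption; adjust to whatever you actually expose):

```python
import json

# Namespaces considered safe for public exposure; admin, personal,
# and debug are deliberately absent.
ALLOWED_NAMESPACES = {"eth", "net", "web3"}

def is_method_allowed(method: str) -> bool:
    """JSON-RPC methods are namespaced as '<ns>_<name>' (e.g. eth_call)."""
    namespace, _, _ = method.partition("_")
    return namespace in ALLOWED_NAMESPACES

def filter_request(raw_body: str):
    """Return (status, error_body) for a single JSON-RPC request.

    200 means forward upstream; 403 means the method is blocked.
    """
    req = json.loads(raw_body)
    if not is_method_allowed(req.get("method", "")):
        return 403, {"jsonrpc": "2.0", "id": req.get("id"),
                     "error": {"code": -32601, "message": "method not allowed"}}
    return 200, None

status, _ = filter_request('{"jsonrpc":"2.0","id":1,"method":"admin_peers"}')
print(status)  # 403
```

The same check extends naturally to batch requests by applying it to each element of the array.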
When to Choose Geth
If you’re diving into wallets, exchanges, or production apps, Geth often gets the nod because of its stable and mature behavior. It might not be the flashiest option out there, but when you’re looking for reliability over cutting-edge speed, it’s hard to go wrong with Geth.
2) Nethermind (C#)
- Why choose it: It’s packed with features like JSON-RPC support, super-fast tracing, robust L2 integrations, and ongoing performance enhancements (think parallel downloads and AVX512/ARM64 improvements). Check it out here: newreleases.io.
- Useful options can be found under JsonRpc.* (including ports, logging, and per-method controls). You can also tweak request logging and stats reporting when you’re in production. More info is available here: docs.nethermind.io.
When to Opt for Nethermind
If you're working with high-volume L2/rollup stacks, or if you're part of an explorer or infrastructure team looking for more advanced RPCs and tracing capabilities, then Nethermind is definitely the way to go.
3) Besu (Java, Hyperledger)
- Why choose it: It comes packed with enterprise features like on-chain permissioning and some really solid metrics. Plus, since the Merge, it’s been showing some nice performance improvements with median block processing times dropping from around ~1.71s to ~0.49s, even on basic VMs. Don’t forget to enable Bonsai, Snap, and those high-spec RocksDB flags! (besu-eth.github.io)
When to Choose Besu
If your project has enterprise governance or permissioning requirements and you need solid performance, Besu might be the way to go.
4) Erigon 3 (Go/C++ modules)
- Why choose it: It's all about efficiency and saving on storage. You can run RPC either in-process or through a standalone rpcdaemon, and there are three deployment modes (embedded, local, or remote), so it fits scale-out topologies easily. Check it out on GitHub!
- 2025 milestone: Erigon 3 is the production release, and the 2.x line is deprecated ahead of Pectra, so be sure to migrate if you haven't already. More info is available on their website.
- Tip: For better isolation, run rpcdaemon out-of-process; if you need to scale horizontally for reads, run it remotely over gRPC. You can learn more about this in the docs.
When to Prefer Erigon
So, here’s the deal: if you’re dealing with archive-heavy, trace-heavy, or cost-sensitive stacks, Erigon might just be your best friend. It really shines in scenarios where you need lean storage and a flexible RPC topology.
5) Reth (Rust, Paradigm)
- Why choose it: It's production-ready, offering amazing RPC throughput and super quick sync times. Plus, it’s been audited and comes with solid guidance for anyone running high-IOPS NVMe setups. Check it out here.
- Performance signals: Independent and partner benchmarks have revealed impressive multi-x RPC gains compared to Geth and Erigon in a bunch of scenarios. The debug and trace features really shine, too! Just make sure to validate it for your specific workload. More details can be found here.
- Minimal config:
reth node \
  --http --http.addr 0.0.0.0 --http.port 8545 \
  --http.api eth,net,trace \
  --authrpc.addr 127.0.0.1 --authrpc.port 8551 \
  --authrpc.jwtsecret /path/jwt.hex
CLI and Transport Details from Reth Docs
Check out the CLI and transport info from the Reth documentation right here: reth.rs.
When to Choose Reth
If you're an RPC provider or a power user on the hunt for maximum QPS and minimal latency, Reth might just be your perfect match. Plus, if staying up-to-date with the latest upgrades is a priority for you, Reth has got your back.
Practical diversity mix (2026)
- Primary: Reth or Erigon when you need those high-QPS reads
- Secondary: Geth or Nethermind for a solid, stable fallback
- Optional: Besu if you're working with enterprise chains or components
This also boosts the diversity of Ethereum clients, something the ecosystem is really striving for. (ethereum.org)
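Running two clients only pays off if your read path can actually fail over between them. A minimal sketch with injected transports (the transport functions here are stand-ins for real HTTP calls to, say, Reth and Geth):

```python
def failover_call(transports, method, params):
    """Try each client transport in order; return the first successful result.

    `transports` is a list of callables (method, params) -> result that
    raise on failure, e.g. wrappers around HTTP POSTs to each client.
    """
    last_error = None
    for send in transports:
        try:
            return send(method, params)
        except Exception as exc:  # network error, 5xx, malformed payload...
            last_error = exc
    raise RuntimeError(f"all clients failed: {last_error}")

# Demo with fake transports: primary is down, secondary answers.
def primary(method, params):
    raise ConnectionError("reth unreachable")

def secondary(method, params):
    return "0x1234"

print(failover_call([primary, secondary], "eth_blockNumber", []))  # 0x1234
```

In production you'd also compare head blocks between the two clients so a stalled-but-responsive node doesn't silently serve stale data.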
Layer‑2 stacks: OP Stack and Arbitrum
OP Stack (Optimism, Base, many app‑chains)
- Components:
- op‑geth (handles execution/RPC, and it's JSON‑RPC compatible)
- op‑node (manages consensus/rollup RPC with the “optimism_” namespace)
- Important RPC Info for Operators:
- Don’t count on a public mempool; route transactions through op-geth’s configured sequencer HTTP endpoint (the --rollup.sequencerhttp setting), and handle your own submitter retries and backoff. (Check out the docs)
- op‑node gives you access to rollup state through JSON‑RPC. Use it for things like sync and health checks, but steer clear of using it for anything user-facing. (More details here)
When to Prefer OP Stack
If you're looking for EVM parity along with straightforward multi-appchain operations, then OP Stack is the way to go. It's open-source and super well-documented, making it a solid choice for your projects.
Arbitrum Nitro
- Make sure to keep Nitro updated! ArbOS 51 “Dia” is set to go live on January 8, 2026, on One/Nova--operators need to be on v3.9.3 or later to stay in sync. (docs.arbitrum.io)
- Quick tip: the official docs stress the importance of using release builds only and recommend scaling CPU cores for better RPC concurrency. (docs.arbitrum.io)
When to Choose Nitro
Nitro is your go-to choice if you're looking for the largest Layer 2 by activity. It’s perfect for production tooling and comes with straightforward upgrade schedules. Check it out here: (github.com).
Solana: validator‑co‑located RPC plus Geyser gRPC
- You can run Agave (formerly known as the Solana Labs validator) or check out the Jito-Solana fork. Here’s what you need to expose:
- HTTP JSON-RPC (8899) for all those compatibility reads and writes.
- WebSocket subscriptions (8900) so you can get real-time updates.
- Geyser gRPC for super-high throughput streaming of accounts, slots, and transactions. You might want to use plugins like jito-foundation’s geyser-grpc. Check it out on solana.com.
- Looking ahead to 2025-2026:
- The 1.18.x train is your go-to stable line. The Solana Labs repo is shifting to Agave, which means some deprecations and RPC cleanups are on the way--so keep an eye on those changelogs! You can follow along at github.com.
- So, why should you care about Geyser? It helps you offload the heavy index and stream work from the validator, letting you scale your read services independently with tools like Kafka, DB, or gRPC microservices. More info can be found at docs.solanalabs.com.
Example: Enabling a Geyser gRPC Plugin
On Solana, a Geyser plugin is a shared library the validator loads at startup, configured through a JSON file. To get one running:
- Build or download the plugin
Grab the Geyser gRPC plugin (a compiled shared library) from the official repository. You can find it here.
- Write the plugin config
Create a JSON file (for example /etc/solana/geyser.json) whose libpath field points at the shared library, plus the plugin's own settings such as the gRPC listen address.
- Point the validator at it
Start (or restart) the validator with --geyser-plugin-config set to that file.
- Verify the installation
Check the validator logs for a message confirming the plugin loaded, then connect a gRPC client to the configured address.
And that’s it! The command below wires everything together.
solana-validator \
--full-rpc-api \
--rpc-bind-address 0.0.0.0 \
--geyser-plugin-config /etc/solana/geyser.json
Check out the plugin project and wiring details in the Jito plugin repo over at GitHub.
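To make the config step concrete, here's a small sketch that writes a plugin config file. The libpath key is what the validator's plugin loader reads; the grpc section and its key names are assumptions modeled on Jito/Yellowstone-style plugins, so check your plugin's README for the exact schema:

```python
import json

# Hypothetical paths and settings -- adjust for your deployment.
config = {
    # Required by the validator's Geyser plugin loader.
    "libpath": "/opt/solana/plugins/libgeyser_grpc.so",
    # Plugin-specific section (key names vary by plugin).
    "grpc": {"address": "0.0.0.0:10000"},
}

def write_geyser_config(path: str, cfg: dict) -> None:
    """Serialize the plugin config to the file the validator will read."""
    with open(path, "w") as f:
        json.dump(cfg, f, indent=2)

write_geyser_config("/tmp/geyser.json", config)
print(json.load(open("/tmp/geyser.json"))["libpath"])
```

Point --geyser-plugin-config at the resulting file and watch the validator logs for the plugin load message.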
Cosmos SDK chains (CometBFT)
- First up, serve gRPC on port 9090 and set up REST through the gRPC-gateway on port 1317 for your browser clients. Just make sure to keep your CometBFT RPC on port 26657 either internal or protected with strict access controls. You can check out more about this here.
- Here are some practical tips for operations:
- Terminate TLS for gRPC at your API gateway. And if you really need to expose port 26657, make sure to shape that traffic wisely since WebSocket pubsub can be a bit iffy if not handled properly.
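To make the gRPC-gateway split concrete, here's a sketch of how a module query maps onto a REST path. The bank-module route follows the standard Cosmos SDK gateway layout; the host is a placeholder for your own gateway:

```python
# The gRPC-gateway exposes module queries as REST paths (default port 1317).
# Example: the bank module's AllBalances query. Host is a placeholder.
REST_BASE = "http://localhost:1317"

def balances_url(address: str) -> str:
    return f"{REST_BASE}/cosmos/bank/v1beta1/balances/{address}"

def parse_balances(resp: dict) -> dict:
    """Flatten a gateway balances response into {denom: amount}."""
    return {b["denom"]: b["amount"] for b in resp.get("balances", [])}

# Shape of a typical gateway response (sample data, not a live query):
sample = {"balances": [{"denom": "uatom", "amount": "42"}],
          "pagination": {"next_key": None, "total": "1"}}
print(parse_balances(sample))  # {'uatom': '42'}
```

The same query is available as typed gRPC on 9090; the gateway version exists mainly for browser clients that can't speak gRPC directly.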
When to Choose the Cosmos SDK Stack
The Cosmos SDK stack is a solid choice when you're after typed gRPC queries and well-structured, module-driven APIs. It's particularly awesome for building sovereign rollups or appchains.
Bitcoin: pair bitcoind with electrs for API scale
- So, bitcoind’s JSON-RPC works, but it’s not exactly the best when it comes to wallet or explorer user experience at scale. That’s where Electrs comes in. Built with Rust, it creates SSD-friendly indexes and supports both the Electrum protocol and HTTP APIs. Looking ahead to 2026, the mempool/electrs fork is set to be the go-to production-grade option (it’s also essential for Mempool 3.x). Check it out here: (github.com)
- When it comes to light mode versus full indexing, light mode does save on disk space, but you’ll notice slower lookups. For production, it’s better to go with full indexing and put some rate limits on the HTTP side. More info can be found here: (github.com)
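Under the hood, electrs serves the Electrum protocol, which indexes outputs by "scripthash": the SHA-256 of the output script, hex-encoded with the bytes reversed. A quick sketch (the example script is a hypothetical P2WPKH output, not a real address):

```python
import hashlib

def electrum_scripthash(script_pubkey: bytes) -> str:
    """Electrum protocol scripthash: sha256(scriptPubKey), byte-reversed hex."""
    digest = hashlib.sha256(script_pubkey).digest()
    return digest[::-1].hex()

# Hypothetical P2WPKH output script: OP_0 PUSH20 <20-byte key hash>
script = bytes.fromhex("0014" + "00" * 20)
print(electrum_scripthash(script))
```

Clients then query methods like blockchain.scripthash.get_balance with that hash, which is why electrs can answer wallet-style lookups without scanning the chain.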
Emerging L1s you may be evaluating
- Aptos: Fullnodes now have a REST API available! Operators should definitely check out the Node Health Checker (NHC) and aim to follow the high-IOPS guidelines for better stability. You can find more info here.
- Sui: Just a heads up, JSON-RPC is on its way out for production use. It's time to enable gRPC indexing and get your migration plans in order. It might also be a good idea to run JSON-RPC on a separate node while you transition. More details can be found here.
- Starknet: They’ve got some cool open-source nodes called Pathfinder (built in Rust) and Juno (in Go). Just a reminder: the RPC 0.6/0.7 versions are officially deprecated, so make sure to standardize on RPC 0.8/0.9/0.10 along with compatible client/tooling versions. Get the full scoop here.
Concrete, copy‑paste server configs that work
A) High‑throughput Ethereum read tier with Reth + NGINX
1) Reth Node (HTTP Only, Read Namespaces)
The Reth node below exposes HTTP only, restricted to read-oriented namespaces; the authenticated Engine API stays bound to localhost.
reth node \
--http --http.addr 127.0.0.1 --http.port 8545 \
--http.api eth,net,trace,web3 \
--authrpc.addr 127.0.0.1 --authrpc.port 8551 \
--authrpc.jwtsecret /var/lib/reth/jwt.hex
Check out the Reth transport and API flags as detailed in the docs. You can find everything you need right here: (reth.rs).
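Once the read tier is up, a quick smoke test is eth_blockNumber, whose result comes back as a 0x-prefixed hex quantity. Here's a sketch of the request framing and result parsing; the actual HTTP send is left to your client of choice, and the sample response is illustrative:

```python
import json

def make_request(method: str, params: list, req_id: int = 1) -> str:
    """Frame a single JSON-RPC 2.0 request body."""
    return json.dumps({"jsonrpc": "2.0", "id": req_id,
                       "method": method, "params": params})

def parse_block_number(response_body: str) -> int:
    """Decode an eth_blockNumber response ('0x...' hex) to an int."""
    resp = json.loads(response_body)
    if "error" in resp:
        raise RuntimeError(resp["error"])
    return int(resp["result"], 16)

# e.g. POST make_request("eth_blockNumber", []) to your NGINX front.
sample = '{"jsonrpc":"2.0","id":1,"result":"0x1312d00"}'
print(parse_block_number(sample))  # 20000000
```

Comparing this number across your primary and fallback clients is a cheap head-lag check for the monitoring tier.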
2) NGINX front (HTTP/1.1 keepalive + sane limits)
When setting up your NGINX front, it’s all about keeping things efficient and smooth. Using HTTP/1.1 with keepalive can make a big difference in how your server handles connections. This helps reduce latency for your users by allowing multiple requests to be sent over a single connection, rather than opening a new one each time.
Here are a few key settings to consider:
- Keepalive Timeout: This setting controls how long to keep the connection open while waiting for new requests. A common value is around 65 seconds, but you can adjust it based on your traffic patterns.
- Limit Connections: To prevent any single user from hogging all your resources, you can set limits on the number of connections. This helps ensure fair access for everyone. Here’s how you might set that up:
http {
    limit_conn_zone $binary_remote_addr zone=addr:10m;
    server {
        listen 80;
        server_name yourdomain.com;
        location / {
            # Limit the number of simultaneous connections for each IP
            limit_conn addr 10;
            ...
        }
    }
}
- Limit Requests: Similarly, you can limit the number of requests a single IP can make in a short period, which can help mitigate abuse:
http {
    limit_req_zone $binary_remote_addr zone=mylimit:10m rate=1r/s;
    server {
        ...
        location / {
            limit_req zone=mylimit burst=5;
            ...
        }
    }
}
Bonus Tips:
- Keep an Eye on Performance: Regularly monitor your server's performance and tweak your limits as needed. Tools like Grafana or Prometheus can be super handy for this.
- Test Your Configuration: Before rolling out changes, test them in a staging environment to see how they affect your users’ experience.
By fine-tuning these settings, your NGINX front will be in great shape, providing a seamless experience for all your visitors!
upstream reth {
server 127.0.0.1:8545 max_fails=3 fail_timeout=10s;
keepalive 64;
}
server {
listen 443 ssl http2;
server_name rpc.example.com;
client_max_body_size 4m; # cap batch payloads
proxy_read_timeout 65s; # eth_call with large state
location / {
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header Host $host;
proxy_pass http://reth;
}
}
Next, make sure to set some application-level limits (check out the batching guidance below). (json-rpc.dev)
B) Erigon 3 with standalone rpcdaemon (local mode)
# process 1 - erigon
erigon \
--datadir=/data/erigon \
--private.api.addr=127.0.0.1:9090
# process 2 - rpcdaemon (same host, direct DB access)
rpcdaemon \
--datadir=/data/erigon \
--http.addr=127.0.0.1 --http.port=8545 \
--http.api=eth,debug,trace,net,web3
Deployment Modes and Flags in Erigon 3
According to the Erigon 3 documentation, there are several deployment modes that you can choose from, each with its own set of flags. Here’s a quick rundown to help you get started.
Deployment Modes
Erigon 3’s RPC layer runs in three modes, matching the overview earlier:
- Embedded: the RPC server runs inside the erigon process itself. Simplest to operate.
- Local: a separate rpcdaemon process on the same host with direct database access (the setup shown above).
- Remote: rpcdaemon on a different host, talking to erigon over gRPC. Useful for scaling reads horizontally.
Flags
When launching Erigon, a few flags do most of the tailoring:
- --prune.mode: full (the default), archive (complete historical state; must be set from genesis), or minimal.
- --http and --http.api: enable the JSON-RPC server and choose which namespaces it serves.
- --ws: turns on WebSocket support for subscriptions.
- --private.api.addr: the gRPC endpoint that rpcdaemon connects to.
You can mix and match these modes and flags to fit your needs perfectly!
For more detailed info, don’t forget to check out the full documentation on Erigon's site.
C) OP Stack node (execution + rollup RPC)
# op-geth: user-facing RPC + sequencer routing
op-geth \
--http --http.addr 0.0.0.0 --http.port 8545 \
--ws --ws.addr 0.0.0.0 --ws.port 8546 \
--rollup.sequencerhttp=http://sequencer:8550
# op-node: rollup status/service RPC (operator-only)
op-node \
--rpc.addr 127.0.0.1 --rpc.port 9545
Sequencer Routing and Role Separation
Check out the Optimism docs for all the details on sequencer routing and role separation. You can find everything you need here.
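Since the OP Stack has no public mempool, a front-end router typically sends writes to the sequencer endpoint and everything else to the local replica. A minimal sketch of that split (both URLs are placeholders matching the config above):

```python
# Route writes to the sequencer, reads to the local op-geth replica.
# URLs are placeholders for your own deployment.
SEQUENCER_URL = "http://sequencer:8550"
REPLICA_URL = "http://127.0.0.1:8545"

WRITE_METHODS = {"eth_sendRawTransaction"}

def upstream_for(method: str) -> str:
    """Pick the upstream for a JSON-RPC method."""
    return SEQUENCER_URL if method in WRITE_METHODS else REPLICA_URL

print(upstream_for("eth_sendRawTransaction"))  # http://sequencer:8550
print(upstream_for("eth_call"))                # http://127.0.0.1:8545
```

Note that op-geth's own --rollup.sequencerhttp flag already does this routing for you; the sketch is useful when the split has to live in your gateway instead.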
D) Solana validator with Geyser gRPC plugin
solana-validator \
--full-rpc-api \
--rpc-bind-address 0.0.0.0 \
--geyser-plugin-config /etc/solana/jito-geyser.json
Geyser Plugin Setup
Alright, let’s get your Geyser plugin set up according to the Jito Foundation project guidelines. We’ll be sticking to the HTTP/WSS defaults as outlined in the Solana docs.
You can find all the details over on GitHub.
Hard‑won best practices we deploy for clients
1) Set batch and payload limits at the edge
- Keep the JSON‑RPC batch length in check (think around 20-25) and limit the total body size to about ~1 MB for public endpoints to avoid any abuse. If these limits are crossed, hit them with an HTTP 413 error. (json-rpc.dev)
2) Cache only what’s safe
- Only cache eth_call responses when the block parameter is a hash (which is immutable), and make sure to include the call data in the key. Avoid caching anything evaluated at “latest.”
- For eth_getBlockByNumber, cache with full transactions=false and only for specific block numbers, keeping reorg depth in mind where applicable.
3) Go for HTTP/1.1 keep-alive with JSON-RPC and stick to WebSockets for subscriptions
Using JSON-RPC over HTTP is still super common and scales nicely behind regular proxies. It’s best to rely on WebSockets only for subscriptions when necessary. Check it out here.
4) Keep sensitive namespaces and the Engine API under wraps
- The admin, personal, and debug/tracing namespaces for Geth, Nethermind, and Besu should definitely be kept private. Also, remember that the Engine API (8551) should only be accessible locally and requires authentication. Check out more details at (geth.ethereum.org).
5) Keep an Eye on Key Golden Signals
- Track p50/p95 latency by method, QPS, error codes, node head lag compared to network head, subscription counts, and OS I/O saturation.
- If you're using Besu, make sure to enable Bonsai + Snap; for Reth, it's important to have NVMe delivering over 16K IOPS as recommended by Paradigm. (besu-eth.github.io)
6) Design for Client Diversity on Ethereum
- Make sure to run at least two different execution clients in production. This helps minimize the risk of correlated failures and aligns with the guidance from the Ethereum Foundation on diversity. Check it out here: (ethereum.org)
7) Keep an Eye on Chain-Specific Deprecations and Release Trains
- Arbitrum Nitro v3.9.3+ for ArbOS 51 is coming your way on January 8, 2026. Check out the details here.
- Solana 1.18.x/Agave repo is transitioning, so be sure to keep an eye on the changelogs for any RPC removals. You can find them here.
- Starknet RPC versions 0.6 and 0.7 are getting deprecated, so it’s best to standardize on 0.8, 0.9, and 0.10. More info on that can be found here.
- For Sui, they’re enabling gRPC indexing and planning to retire JSON-RPC where it makes sense. Get the scoop here.
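The batch limits and cache-key rules above translate directly into edge middleware: reject oversized or over-long batches with 413, and only build cache keys for immutable lookups. A sketch using the limits suggested earlier (handling a string block hash; EIP-1898 object params would need an extra branch):

```python
import json

MAX_BATCH = 25               # suggested batch length cap
MAX_BODY_BYTES = 1_000_000   # ~1 MB body cap for public endpoints

def check_request(raw_body: bytes) -> int:
    """Return an HTTP status: 200 to forward, 413 when limits are exceeded."""
    if len(raw_body) > MAX_BODY_BYTES:
        return 413
    payload = json.loads(raw_body)
    if isinstance(payload, list) and len(payload) > MAX_BATCH:
        return 413
    return 200

def eth_call_cache_key(params: list):
    """Cache eth_call only when the block param is a 32-byte hash (immutable).

    The full call object goes into the key; 'latest'/'pending'/numbers
    return None, meaning: do not cache.
    """
    call, block = params[0], params[1]
    if not (isinstance(block, str) and block.startswith("0x") and len(block) == 66):
        return None
    return ("eth_call", json.dumps(call, sort_keys=True), block)

batch = json.dumps([{"jsonrpc": "2.0", "id": i, "method": "eth_chainId"}
                    for i in range(30)]).encode()
print(check_request(batch))  # 413
print(eth_call_cache_key([{"to": "0xabc"}, "latest"]))  # None
```

Sorting the call object's keys keeps the cache key stable regardless of how the client ordered its JSON fields.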
Server selection by use case
- Exchange/Wallet Backend (Ethereum): Use Geth and Nethermind, and throw in Reth as a read-only tier for those peak times. This combo gives you reliable performance with plenty of read capacity when you need it. Check it out here.
- NFT/Gaming Reads (Solana): Go for a validator plus Geyser gRPC to stream account updates. And don’t forget to keep HTTP for compatibility. More info can be found here.
- Block Explorer / Analytics (Bitcoin): Set up bitcoind and mempool/electrs with full indexing capabilities. Make sure to place it behind an API gateway with strict usage limits. Dive deeper here.
- Enterprise/Regulatory (Ethereum): For permissioning, Besu is your best bet, paired with Nethermind for RPC. Both of these options come with solid metrics and modern sync features. Check it out here.
- OP Stack Appchain: Use op-geth for user RPC and op-node for operator RPC. Be sure to set up sequencer routing and consider adding a plan for client diversity if it makes sense for you. More details available here.
“New details” operators often miss (that bite later)
- Just a heads-up: Geth’s PathDB really only kicks in for new databases. If you’re dealing with an old hashdb, it’ll stick around unless you go for a full resync--so plan your maintenance windows wisely. (mygit.top)
- When you're using Reth, keep in mind that it won't automatically open HTTP/WS unless you add the right flags; out of the box, only the Engine API is running on port 8551 with JWT for security. Just remember to enable user RPC if you need it! (reth.rs)
- Heads up with Erigon 3--it’s switched up the defaults a bit (Full node vs Archive). If you need access to the historical state, make sure to set prune.mode=archive right at genesis. (github.com)
- Just so you know, the OP Stack doesn’t have a public mempool; eth_sendRawTransaction has to go through the sequencer. If you miss that step, you might get a “working RPC” that can’t actually handle user transactions. (docs.optimism.io)
- If you’re working with Solana, definitely look into Geyser streams--they help offload validators. Without them, heavy getProgramAccounts calls can really throw your validator out of sync under load. (docs.solanalabs.com)
- Lastly, Sui is gearing up to make gRPC the main interface for production--just make sure to enable indexing and manage the rollout carefully to keep everything running smoothly. (docs.sui.io)
Final selection matrix (quick reference)
- If you're looking for maximum production stability right now, go with Geth or Nethermind. Check it out here.
- For the highest RPC throughput with EVM, Reth and Erigon are the champs. Just make sure to validate on your own hardware! More details here.
- Need some enterprise features like permissioning and Java support? Then Besu's your best bet. Get the scoop here.
- Looking at L2 options? You might want to check out op-geth + op-node, or for Arbitrum, Nitro is working with the current ArbOS. Dive into the details here.
- For Solana, using the Agave/Jito validator along with Geyser gRPC is the way to go. Find more info here.
- Bitcoin fans can rely on bitcoind paired with mempool/electrs. Get the latest releases here.
- In the Cosmos ecosystem, start with gRPC, then REST through the gRPC-gateway. And keep that 26657 port private! More details here.
- For Aptos, check out the REST API with strict health and IOPS requirements; Sui is moving towards gRPC. More on that here.
Bottom line
If you’re looking for some reliable “set-and-forget” stability, go with Geth or Nethermind. For those times when you’re really pushing QPS or dealing with heavy trace workloads, consider adding Reth or Erigon into the mix. If you’re working with Solana, definitely check out Geyser gRPC as your go-to scaling solution. When it comes to OP Stack and Arbitrum, make sure you’re up-to-date with the latest releases and keep an eye on your sequencer setup. And a quick reminder across all chains: make sure to codify batch/payload limits and never let sensitive namespaces slip out.
7Block Labs can help you design, measure, and run a mixed-client RPC fabric tailored for your team. With SLAs, handy dashboards, and defined cost limits, you'll be able to deliver faster without any unexpected infrastructure hiccups.
Meta
Updated: January 7, 2026. Key sources: official client docs and the latest release notes for Reth, Geth, Nethermind, Besu, and Erigon; Solana/Agave and Geyser docs; OP Stack/Arbitrum Nitro announcements; Sui/Aptos production insights. (paradigm.xyz)
2026 Buyer’s Guide to Open-Source Web3 Servers
Looking to dive into the world of Web3? You're in the right place! This guide’s got everything you need to know about running servers for Ethereum, Solana, OP Stack, Arbitrum, Bitcoin, Cosmos, Starknet, Sui, and Aptos. We’ve packed it with handy copy-paste configurations, current release quirks, and those crucial hard limits you should set at the edge. Let's get started!
What You’ll Find Here
- Ethereum: Best setups, tools, and configs to get your node running smoothly.
- Solana: Tips for efficient operation and troubleshooting common issues.
- OP Stack: Quick start guides and resources to help you scale effectively.
- Arbitrum: Insights into optimal configurations for performance.
- Bitcoin: Running your Bitcoin node like a pro with all the essentials.
- Cosmos: A look into multi-chain interactions and how to manage them.
- Starknet: Details on deploying contracts and managing resources.
- Sui: Unique considerations for building on Sui.
- Aptos: Best practices to optimize your experience on Aptos.
Key Configurations
Here are some copy-paste starting points. Treat them as illustrative sketches: exact keys and file formats vary by client and version, so check each project’s docs before deploying.
Ethereum
# Ethereum node config
{
"network": "mainnet",
"sync_mode": "fast",
"port": 30303,
"rpc": {
"enabled": true,
"port": 8545
}
}
Solana
# Solana validator config
{
"identity": "<YOUR_IDENTITY_KEY>",
"ledger": "/path/to/ledger",
"rpc-port": 8899,
"fullnode": true
}
OP Stack
# OP Stack overview config
{
"network": "op_mainnet",
"rpc": "http://localhost:8545",
"max_gas": 20000000
}
Arbitrum
# Arbitrum node config
{
"network": "Arbitrum One",
"rpc": "http://localhost:8547",
"speed": "fast"
}
Bitcoin
# Bitcoin node config
{
"server": true,
"rpcuser": "<YOUR_USER>",
"rpcpassword": "<YOUR_PASSWORD>",
"port": 8332
}
Cosmos
# Cosmos setup config
{
"chain-id": "cosmoshub-4",
"rpc.listen-address": "tcp://0.0.0.0:26657",
"seeds": "<SEED_NODES>"
}
Starknet
# Starknet node config
{
"rpc": {
"port": 5050,
"provider": "json"
},
"network": "goerli"
}
Sui
# Sui node config
{
"network": "sui-mainnet",
"rpc": {
"enabled": true,
"port": 9101
}
}
Aptos
# Aptos node configuration
{
"network": "mainnet",
"rpc": {
"enabled": true,
"port": 8080
}
}
Release Gotchas
Every platform has its quirks. Here are some current gotchas you might face:
- Ethereum: Watch out for network congestion during major events.
- Solana: Performance may dip during high transaction periods.
- OP Stack: Certain upgrades can introduce breaking changes; keep an eye on release notes.
- Arbitrum: Gas fees can fluctuate, so plan accordingly.
- Bitcoin: Ensure your node stays in sync with the network.
- Cosmos: Interchain communication can be tricky; follow migration guides closely.
- Starknet: Keep up with the latest updates to avoid compatibility issues.
- Sui: Pay attention to resource allocation for optimal performance.
- Aptos: Familiarize yourself with new features in each release for a smoother experience.
Hard Limits at the Edge
Setting hard limits is a must to avoid server overload:
- Ethereum: Limit gas usage to prevent spikes.
- Solana: Set transaction rate limits to maintain stability.
- OP Stack: Don’t overcommit resources; set thresholds.
- Arbitrum: Keep an eye on memory usage to avoid crashes.
- Bitcoin: Monitor bandwidth for RPC calls.
- Cosmos: Define limits on block size to prevent bloating.
- Starknet: Resource limits on contract execution can save you headaches.
- Sui: Tune performance settings for optimal deployment.
- Aptos: Control API rate limits for better management.
Armed with this guide, you're ready to tackle your Web3 server setup like a champ! Get in there and start building!

