By AUJay
Ethereum Node Hardware Requirements and API Performance Tuning
1) What changed in 2025 (and why it matters for sizing)
- May 7, 2025: The Pectra mainnet upgrade went live. For operators, it brought changes to the Engine API and new minimum supported client versions across the EL and CL.
- December 3, 2025: The Fusaka mainnet upgrade shipped PeerDAS and introduced "Blob Parameter Only" (BPO) follow-up forks that raise blob capacity beyond the original EIP-4844 design of 3-target/6-max. The near-term effect is more rollup blob throughput and new network/storage profiles on the consensus layer: BPO1 (December 9, 2025) raises the target/max to 10/15 blobs, and BPO2 (January 7, 2026) bumps that to 14/21. Keep your EL and CL versions current through each fork.
- The EIP-4844 basics still hold: each blob is about 128 KiB, lives on the consensus layer, and is pruned after roughly 18 days (4096 epochs). Plan CL disk around these short-lived blob sidecars; EL storage remains dominated by state and history.
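Those constants translate into a rough CL disk budget. A back-of-envelope sketch, using only the figures from the text (real clients add indexing overhead on top, so treat the result as a floor when planning headroom):

```python
# Upper bound on blob-sidecar data retained on a consensus-layer disk.
BLOB_SIZE = 128 * 1024      # 128 KiB per blob
RETENTION_EPOCHS = 4096     # ~18 days
SLOTS_PER_EPOCH = 32

def blob_retention_bytes(max_blobs_per_block: int) -> int:
    """Worst case: every slot's block carries the maximum blob count."""
    slots = RETENTION_EPOCHS * SLOTS_PER_EPOCH
    return slots * max_blobs_per_block * BLOB_SIZE

# Pre-Fusaka max of 6 blobs/block -> a 96 GiB ceiling
gib = blob_retention_bytes(6) / 2**30
```

At the BPO2 max of 21 blobs per block, the same formula gives a ceiling of a few hundred GiB, which is why the text budgets extra CL disk but no extra EL disk.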
What That Means for Sizing
- Consensus clients now carry more of the blob sampling and serving load post-PeerDAS, but not every node has to download all blob data anymore. Bandwidth pressure patterns shift, while long-term disk impact stays bounded by the blob retention window. On the EL side, the factors to watch are state size, state scheme (path-based vs. legacy hash-based), and whether you genuinely need archive-grade history.
2) Minimums vs reality: current hardware baselines by client and node type
Here are the figures to plan around as of December 2025. Budget headroom beyond these numbers.
- Ethereum.org baseline (general):
- Minimum: 2 TB SSD, 8 GB RAM, 10+ Mbit/s; recommended: fast 2+ TB SSD, 16+ GB RAM, 25+ Mbit/s. Budget roughly another ~200 GB for consensus data. (ethereum.org)
- Geth (EL):
- A snap-synced full node currently needs 650+ GB and grows about 14 GB per week. Provision around 2 TB to avoid frequent offline pruning sessions, and prune offline periodically to reclaim space. (geth.ethereum.org)
- Geth 1.16+ adds a "path-based" archive mode that takes roughly 2 TB for full history, with trade-offs--notably, historical eth_getProof isn't supported yet. The older hash-based (legacy) archive runs 12-20 TB. Pick your mode based on how you need to serve historical proofs and queries. (geth.ethereum.org)
- Nethermind (EL):
- Recommended specs: for mainnet, 16 GB RAM and at least 4 CPU cores; for an archive node, 128 GB RAM and 8 cores. Storage should be a fast 2 TB SSD or NVMe with at least 10,000 IOPS to keep sync and RPC smooth. (docs.nethermind.io)
- Erigon (EL):
- Storage Needs: ~350 GB for minimal, ~920 GB for full, and ~1.77 TB for an archive node on Ethereum mainnet. Depending on prune mode, plan for 1-4 TB of disk and 16-64 GB of RAM. Erigon also ships a separate rpcdaemon that scales better when run out of process. (docs.erigon.tech)
- Bandwidth Tips: 25 Mbit/s is recommended for non-staking nodes; staking nodes should aim for around 50 Mbit/s.
- Reth (EL):
- Full node storage is about 1.2 TB; archive is around 2.8 TB. A stable connection of 24 Mbit/s or higher is recommended, and fast TLC NVMe is the priority. (reth.rs)
- Here's a quick look at the sizing requirements for consensus clients (CL):
- Nimbus: budget around 200 GB for beacon data. To run it alongside an execution layer (EL) on one machine, use a 2 TB SSD and 16 GB RAM. See the Nimbus guide.
- Teku: 4 cores, 16 GB RAM, and a 2 TB SSD as a solid baseline for a combined EL/CL "full node + validator." (docs.teku.consensys.io)
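The Geth growth figures above (650+ GB today, ~14 GB/week) make disk runway easy to estimate. A hedged sketch that ignores CL data and snapshot overhead, with a hypothetical safety reserve you can adjust:

```python
def weeks_until_full(disk_gb: float, current_gb: float,
                     growth_gb_per_week: float,
                     reserve_gb: float = 100.0) -> float:
    """Weeks of runway before the disk hits its usable limit.

    reserve_gb keeps slack free so offline pruning can still run.
    """
    usable = disk_gb - reserve_gb
    return max(0.0, (usable - current_gb) / growth_gb_per_week)

# 2 TB drive, 650 GB used, 14 GB/week growth -> ~89 weeks of runway
runway = weeks_until_full(2000, 650, 14)
```

The same function shows why a 1 TB drive is a bad idea for Geth today: the runway comes out to well under a year even before accounting for pruning slack.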
Rule of Thumb (Dec 2025)
- For a single-box validator with light RPC, you'll need:
- 8 cores
- 32 GB RAM
- 2 TB TLC NVMe with DRAM
- An additional SSD for backups/OS
- For a dedicated read-heavy RPC setup, scale out with multiple EL nodes (Erigon, Reth, or Geth) behind a proxy. Prioritize NVMe with strong sustained write speeds and low latency over raw capacity. (docs.erigon.tech)
3) Storage that won’t bite you at 3 a.m.
- Go for TLC NVMe drives with DRAM cache and avoid QLC, especially during write-heavy sync phases. Keep SSDs cool to avoid throttling--Nethermind's sync notes and performance guide stress sustained write speeds and proper cooling. (docs.nethermind.io)
- When it comes to public cloud options, EBS/PD/Azure SSDs work just fine--just keep their limits in mind:
- AWS EBS gp3 now offers up to 80k IOPS and 2,000 MiB/s per volume (as of September 26, 2025). Under-provision IOPS or throughput and snapshotting or state sync can slow or stall. (aws.amazon.com)
- Google Persistent Disk SSD delivers up to 80k IOPS and 1,200 MiB/s per instance. Queue depth is key on networked storage: aim for 32-128+ outstanding I/Os when targeting 16k-64k+ IOPS. (docs.cloud.google.com)
- Azure Premium SSD v2 offers up to 80k IOPS and 1,200 MB/s per disk, with a free baseline of 3,000 IOPS and 125 MB/s; scale up as needed. (learn.microsoft.com)
- For Erigon, don't put the database on ZFS. RAID0 across several NVMe drives works for capacity and throughput, provided you handle redundancy at another layer. (docs.erigon.tech)
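The queue-depth guidance above follows from Little's law: outstanding I/Os = IOPS × latency. A minimal sketch, assuming a uniform average I/O latency (real disks won't give you that, so treat the output as a lower bound):

```python
def required_queue_depth(target_iops: float, avg_latency_ms: float) -> float:
    """Little's law estimate of outstanding I/Os needed to hit target_iops."""
    return target_iops * (avg_latency_ms / 1000.0)

# 64k IOPS at 1 ms networked-storage latency needs ~64 I/Os in flight;
# the same target at 2 ms needs ~128, which is why cloud disks want
# deeper queues than local NVMe.
qd_local = required_queue_depth(64_000, 1.0)
qd_cloud = required_queue_depth(64_000, 2.0)
```

If your sync stalls well below the provisioned IOPS, check `iostat -x` queue depth before blaming the volume: a single-threaded writer simply cannot keep a networked disk busy.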
4) Pruning and state scheme: reclaim space safely
- Geth Offline Pruning (State): stop the node, then run `geth snapshot prune-state`. Expect it to take hours. Prune when the disk reaches about 80% capacity rather than waiting until 99%, and keep at least ~40 GB free or the prune can fail. (geth.ethereum.org)
- Geth History Pruning (PoW Bodies/Receipts): run `geth prune-history` to drop a large chunk of pre-Merge history that most RPC nodes no longer need. (geth.ethereum.org)
- Archive Needs:
- Do you need historical state queries at arbitrary blocks, or long-range tracing? If so, run an archive node. Geth's new path-based archive uses about ~2 TB but has caveats for historical proofs; the legacy hash-based archive gives full feature parity at much larger sizes. (geth.ethereum.org)
5) JSON‑RPC throughput: 10 knobs that actually move the needle
1) Choose the Right EL for Your Workload
- Erigon suits heavy historical queries and high request concurrency. Run the rpcdaemon separately and pin it to dedicated CPU cores. (docs.erigon.tech)
- Reth targets fast call throughput with modern Rust internals. Size the disk at 1.2+ TB for full data and around 2.8 TB for archive. (reth.rs)
- Geth remains the most widely deployed option. Use the new path-based state where you can, and plan pruning routines to keep sizing in check. (geth.ethereum.org)
2) Transport choice matters
- Use HTTP for quick, stateless requests; use WebSocket for persistent streams such as logs or new-head subscriptions. Geth supports HTTP, WebSocket, and IPC--expose only what you actually need. (geth.ethereum.org)
3) Batch--but within server limits
- Geth defaults: `BatchRequestLimit` is 1000 and `BatchResponseMaxSize` is 25,000,000 bytes. Requests over these limits can fail or get throttled, so tune your client-side split size accordingly. (geth.ethereum.org)
- Nethermind: see `JsonRpc.MaxBatchSize` (default 1024) and `MaxBatchResponseBodySize` (default 32 MiB). Set these limits explicitly to protect the node. (docs.nethermind.io)
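On the client side, the safe pattern is to split large batches before they hit the server's limit. A minimal sketch, assuming Geth's default `BatchRequestLimit` of 1000 (the payload shape is standard JSON-RPC; the request list here is illustrative):

```python
from typing import Any, Iterator

def split_batches(requests: list[dict[str, Any]],
                  max_batch: int = 1000) -> Iterator[list[dict[str, Any]]]:
    """Yield chunks of a JSON-RPC batch, each within the server's limit."""
    for i in range(0, len(requests), max_batch):
        yield requests[i:i + max_batch]

# 2500 requests -> chunks of 1000, 1000, 500
reqs = [{"jsonrpc": "2.0", "id": i, "method": "eth_blockNumber", "params": []}
        for i in range(2500)]
chunks = list(split_batches(reqs))
```

A fuller version would also watch response sizes against `BatchResponseMaxSize`, since 1000 cheap calls and 1000 `eth_getBlockByNumber(full=True)` calls are very different payloads.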
4) eth_getLogs can cost you--so paginate smartly
- Work with small block windows and narrow topic filters. Many providers cap ranges between 1k and 10k blocks; Besu ships a built-in hard cap of 1000 blocks via `--rpc-max-logs-range`, and Nethermind caps logs per response with `JsonRpc.MaxLogsPerResponse` (default 20000). Design your pagination at the SDK layer. (besu.hyperledger.org, docs.nethermind.io)
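SDK-layer pagination is mostly a matter of slicing the block range before issuing requests. A sketch of the windowing logic, with a 1000-block window chosen to match the Besu cap mentioned above (the block numbers are illustrative):

```python
from typing import Iterator

def paginate_block_ranges(start: int, end: int,
                          window: int = 1000) -> Iterator[tuple[int, int]]:
    """Yield inclusive (from_block, to_block) windows for eth_getLogs,
    each no wider than the provider's range cap."""
    block = start
    while block <= end:
        yield block, min(block + window - 1, end)
        block += window

windows = list(paginate_block_ranges(19_000_000, 19_002_499, window=1000))
```

Each window then becomes one `eth_getLogs` call with `fromBlock`/`toBlock` set; the windows can be fetched in parallel as long as you stay under the node's concurrency limits.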
5) Tracing Safely
- `debug_trace*` calls can stall nodes, especially on non-archive nodes or for blocks far in the past. Route them to dedicated archive nodes, use scoped tracers, and disable memory, storage, and stack capture unless you actually need them--responses can be massive. (geth.ethereum.org)
6) Geth cache flags: what to keep in mind for 2025
- With Pebble and path-based state, much of Geth's caching has moved out of Go's garbage-collected heap. Cranking up `--cache` no longer affects pruning or database size the way it used to; stick with the defaults and raise it only if you've measured an improvement. (geth.ethereum.org)
7) EL-CL Engine API Isolation
- Keep the authenticated Engine API (default port 8551) private: configure the JWT secret on both EL and CL, bind it to localhost, and firewall everything else. See the JWT secret parameters in the Geth/Teku/Nethermind documentation. (geth.ethereum.org)
8) Peer Count and Snap Tuning (Nethermind)
- To speed up snap/state sync, tune `Network.MaxActivePeers` and `Network.MaxOutgoingConnectPerSec`--for example, raising outgoing connections from the default 20 to around 50. Push these rates too high and your ISP may start throttling you, so use them judiciously. (docs.nethermind.io)
9) RPC Role Separation and Proxies
- Keep "ingest/sync" separate from "serve RPC." With Erigon, run `erigon` and `rpcdaemon` as two distinct processes. With Geth or Nethermind, put a read-only RPC node behind Nginx or HAProxy to get HTTP keep-alive, connection reuse, and server-side request body limits. (docs.erigon.tech)
10) Keep an Eye on the Right Metrics
- Geth: enable `--metrics` and scrape the Prometheus endpoint at `/debug/metrics/prometheus`. Dashboard p2p traffic, block insertion times, and RPC batch metrics. (geth.ethereum.org)
- Nethermind: track `nethermind_json_rpc_requests`, `_errors`, `_bytes_sent/received`, and the Engine metrics (fork choice and newPayload execution time) to catch saturation early. (docs.nethermind.io)
6) Practical examples (apply these today)
A) Validator + light JSON‑RPC on one machine (cost-effective and resilient)
- Hardware: at least 8 cores, 32 GB of RAM, a 2 TB TLC NVMe with DRAM cache, a solid 25-50 Mbit/s uplink, and a UPS. (ethereum.org)
- Software: Geth or Reth for the execution layer (EL), paired with Teku or Nimbus for the consensus layer (CL). Keep the Engine API on localhost, and expose RPC only on your LAN with least-privilege namespaces. (docs.teku.consensys.io)
- Maintenance: run `geth snapshot prune-state` monthly, and occasionally `geth prune-history` if storage creeps up. Alert at 70% disk usage and act at 80%. (geth.ethereum.org)
B) Read-heavy public RPC (app/API backend)
- Topology: 3 to 5 EL nodes behind HAProxy or Nginx. For call throughput, use Erigon (with the rpcdaemon out of process) or Reth, and keep the debug, txpool, and admin namespaces disabled. (docs.erigon.tech)
- Settings:
- Geth: respect the `BatchRequestLimit` of 1000 and cap `BatchResponseMaxSize` at 25 MB; enforce client-side pagination for `eth_getLogs` at the proxy. (geth.ethereum.org)
- Nethermind: set `JsonRpc.MaxBatchSize` (keep it ≤1024) and limit `JsonRpc.MaxLogsPerResponse` (e.g., 5000-10000) to contain the blast radius. (docs.nethermind.io)
- Storage: around 2 TB of NVMe per node. Watch write amplification while catching up, and keep the SSDs cool--Nethermind sync is I/O-intensive. On cloud disks, provision enough IOPS on EBS gp3: start at 8-16k and scale to 30-60k during catch-up. (aws.amazon.com)
- Queries: always segment `eth_getLogs` by block range and topics; for wider windows, index logs off-chain. Besu enforces a hard limit of 1000 blocks per query--mirror that in your API. (besu.hyperledger.org)
C) Deep Analytics and Full Historical Tracing
- Choose Your Archive:
- Erigon Archive: About 1.77 TB, great for quick historical reads.
- Geth Path-based Archive: roughly 2 TB, but watch for limitations on historical Merkle proofs. If you need `eth_getProof` for old blocks, use the legacy hash-based archive, which can run 12-20+ TB. (docs.erigon.tech)
- Tracing on Worker Nodes: keep public RPC isolated, tune timeouts, and use tracers that skip memory and stack capture unless you need them, to keep payloads manageable. (therpc.io)
7) Consensus‑layer specifics post‑EIP‑4844 and Fusaka
- Under EIP-4844, blobs live on the Consensus Layer (CL) for 4096 epochs--roughly 18 days--which kept CL disk manageable even before PeerDAS. With PeerDAS (Fusaka), nodes sample blob data instead of downloading all of it, easing bandwidth and storage while enabling higher blob capacity through the BPO forks. (blog.ethereum.org)
- Here are some capacity constants to keep in mind:
- Blob size: 4096 field elements × 32 bytes = 128 KiB.
- Target/max per block was 3/6 before Fusaka; the BPO forks raise these limits progressively (10/15, then 14/21). Expect higher L2 data rates with no growth in EL disk. (blog.ethereum.org)
- Operationally: run the client version the Ethereum Foundation recommends for the active fork on your CL (Lighthouse, Teku, Nimbus, or Prysm), and watch REST/metrics as the BPO ramp-ups land. (blog.ethereum.org)
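The capacity constants above also give the sustained blob data rates each fork implies. A quick sketch using the 12-second slot time and the target (not max) blob counts from the text:

```python
BLOB_SIZE = 128 * 1024   # bytes per blob
SLOT_SECONDS = 12

def blob_throughput_kib_s(target_blobs: int) -> float:
    """Sustained blob data rate (KiB/s) at a given per-block target."""
    return target_blobs * BLOB_SIZE / SLOT_SECONDS / 1024

# pre-Fusaka target 3 -> 32 KiB/s; BPO1 target 10 -> ~107 KiB/s;
# BPO2 target 14 -> ~149 KiB/s
rates = {name: blob_throughput_kib_s(t)
         for name, t in [("pre-Fusaka", 3), ("BPO1", 10), ("BPO2", 14)]}
```

These are averages over full slots; peak bandwidth during gossip bursts is higher, and with PeerDAS a sampling node only pulls a fraction of each blob's columns, so per-node numbers come in below these totals.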
8) OS and network hygiene for RPC nodes
- Keep file descriptor limits high (around 100k) and enable HTTP keep-alive at the proxy.
- Linux network stack: raise the listen backlog (`net.core.somaxconn` to at least 1024-4096), tune the SYN backlog for traffic bursts, and set reasonable TCP buffer ceilings. Test in staging--these are standard high-throughput webserver tweaks that work well for JSON-RPC too.
- Co-locate your Prometheus exporters and alert on: RPC errors per second, P95/P99 method latency (especially `eth_call`, `eth_getLogs`, and `debug_trace*`), disk busy time, and CL head lag over 1 slot. Both Geth and Nethermind expose metrics that plug straight into Grafana.
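For the P95/P99 alerting mentioned above, Prometheus `histogram_quantile` is the production tool; as a minimal sketch of what the number means, here is a nearest-rank percentile over raw latency samples (the sample list is illustrative):

```python
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile of a sample list (p in 0-100)."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[max(rank - 1, 0)]

# A handful of eth_call latencies (ms) with two slow outliers:
latencies_ms = [12, 15, 14, 250, 13, 16, 14, 15, 900, 14]
p50 = percentile(latencies_ms, 50)
p95 = percentile(latencies_ms, 95)
```

The point of alerting on P95/P99 rather than the mean is visible here: the median stays in the low teens while the tail captures the stalls your users actually feel.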
9) Quick client‑by‑client tuning checklist
- Geth
- For new deployments, use path-based state, and schedule regular `snapshot prune-state` and `prune-history` runs to manage history.
- Stick to the batch limits and keep `--http.api` to the essentials; avoid exposing the Engine API. (geth.ethereum.org)
- Nethermind
- Tune `Network.MaxActivePeers` and `MaxOutgoingConnectPerSec` carefully to speed up snap and block imports; tune `JsonRpc.MaxBatchSize` and `MaxLogsPerResponse` to keep RPC safe.
- If RPC throughput matters more than validator latencies, disable block pre-warm with `Blocks.PreWarmStateOnBlockProcessing=false`. (docs.nethermind.io)
- Erigon
- Pick your `--prune.mode` (minimal/full/archive) deliberately; run `rpcdaemon` as a separate process when scaling; and give ZFS a pass. (docs.erigon.tech)
- Reth
- Size the disk at 1.2+ TB (TLC NVMe). For heavy APIs, run multiple stateless RPC nodes behind a proxy and warm OS caches with realistic load profiles. (reth.rs)
10) Sane starting BOMs (bare metal)
- "All-in-one validator + light RPC"
- CPU: 8C/16T modern x86
- RAM: 32 GB ECC
- Disk: 2 TB TLC NVMe with DRAM (sustained ≥10k IOPS), plus a 500 GB SSD for the OS/logs
- Network: Wired connection with 25-50 Mbit/s; don’t forget a UPS!
- Software: Geth/Reth + Teku/Nimbus; Prometheus/Grafana
- Maintenance: monthly pruning; apply patches on the EF release schedule. (ethereum.org)
- Public RPC cluster (read-heavy)
- 3 EL nodes (Erigon or Reth) each decked out with: 8-16 cores, 32-64 GB RAM, and 2 TB NVMe storage.
- HAProxy or Nginx for connection pooling, request size limits, and per-IP rate limits, plus forced pagination for `eth_getLogs`.
- Cloud: a single 4-8 TB consolidated EBS gp3 volume (IOPS ≥20-40k) per node, or multiple smaller NVMe drives; make sure the OS/driver queue depth is deep enough while syncing. (aws.amazon.com)
11) Common failure patterns we see (and how to avoid them)
- “My node is synced, but RPC keeps timing out when I try to scan wide logs.”
- Cause: oversized `eth_getLogs` windows. Fix: cap the range (typically 1k-5k blocks), narrow the topics, and parallelize; remember Besu defaults to 1000 blocks. (besu.hyperledger.org)
- “Disk seems okay… but then sync just plummets.”
- Cause: thermal throttling and low sustained write speeds on typical consumer SSDs. Fix: enterprise-grade TLC NVMe with heat sinks; monitor SSD temperatures and throttle points; on cloud setups, raise the provisioned IOPS. (docs.nethermind.io)
- “Archive queries are speedy, but calling eth_getProof for older blocks doesn’t work.”
- Reason: Geth's path-based archive doesn't yet serve historical proofs. Fix: use the legacy hash-based archive, or Erigon for historical state reads (proof workloads may still need the hash-based approach). (geth.ethereum.org)
12) Final guidance for decision‑makers
- Treat the EL disk as the critical resource: TLC NVMe with headroom, plus a pruning routine (Geth) or prune modes (Erigon). (geth.ethereum.org)
- Keep the surface tidy: run RPC nodes separately, keep the Engine API private, cap batch sizes and log ranges, and alert on JSON-RPC error rates and latency. (geth.ethereum.org)
- Track the Ethereum Foundation's recommended client versions through each fork (Pectra → Fusaka → BPO phases), and plan short maintenance windows; CL/EL version mismatches cause downtime faster than you'd expect. (blog.ethereum.org)
If you'd like 7Block Labs to validate your specific workload, we can replay your production RPC mix against a selected client matrix (Geth/Erigon/Reth, plus Nethermind where applicable), then produce a tailored BOM and proxy policy with measured P95/P99 latencies and safe concurrency limits for each method.