7Block Labs
Ethereum Technology

By AUJay

Ethereum Validator Hardware Requirements and Ethereum RPC Dedicated Nodes for High-Throughput Workloads


Why this guide

Decision-makers typically have two distinct questions that often get mixed up:

  • What kind of hardware do I need to run an Ethereum validator without hiccups?
  • How should I set up dedicated Ethereum RPC nodes to handle heavy read and tracing workloads effectively?

These roles have distinct challenges and failure modes. This guide breaks them down, accounts for the post-Dencun realities, and gives concrete numbers and deployment blueprints that hold up in 2026.


What changed recently: blobs, bandwidth, and storage

  • Dencun (Deneb/Cancun) went live on mainnet on March 13, 2024, introducing EIP‑4844 “blob” transactions. A blob is a temporary data sidecar aimed primarily at Layer 2s; blobs cut L2 fees while modestly increasing consensus-layer bandwidth and storage needs. (blog.ethereum.org)
  • Each blob is 128 KB; the protocol targets 3 blobs per block with a maximum of 6. Blobs are retained for 4096 epochs (roughly 18 days), adding around 48 GiB of rolling storage on average, up to about 96 GiB at peak. This is a consensus-layer change; the execution storage footprint is essentially unchanged. (docs.teku.consensys.net)
  • Per the EIP‑4844 spec, the worst-case additional bandwidth is under ~0.75 MB per block; sustained load is much lower in practice because blobs are short-lived relative to execution history. (eips.ethereum.org)

Implication: Validators these days need a little extra bandwidth and around 50-100 GiB of additional CL disk space for blob sidecars. However, the main factors for sizing are still your execution client's resource usage and RPC patterns.


Role 1: Ethereum validator node (consensus + execution)

A validator machine has to run two main components:

  • An execution client (EL), which could be something like Geth, Nethermind, Besu, Erigon, or Reth.
  • A consensus client (CL), and for that, you’ve got options like Teku, Lighthouse, Prysm, Nimbus, or Lodestar.

You’ll need the P2P ports open for both the EL and CL to maintain healthy peering. Defaults:

  • EL: 30303 TCP/UDP (Geth, Nethermind, Besu); Erigon may also use 30304 in some setups.
  • CL: 9000 TCP/UDP (Lighthouse, Teku, Nimbus, Lodestar); Prysm instead uses 13000/TCP and 12000/UDP.

Full port-forwarding guidance is at docs.ethstaker.org.
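As a sketch, on an Ubuntu host using ufw the defaults above translate to the following (assuming a Geth + Lighthouse pair; swap in 13000/tcp and 12000/udp for Prysm):

```shell
# Open EL and CL P2P ports for a Geth + Lighthouse host (an assumed pairing).
sudo ufw allow 30303/tcp   # EL peering
sudo ufw allow 30303/udp   # EL discovery
sudo ufw allow 9000/tcp    # CL peering
sudo ufw allow 9000/udp    # CL discovery
sudo ufw enable
```

Remember that the router or cloud security group must forward the same ports; ufw alone is not enough behind NAT.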

Current, cited hardware baselines (2025-2026)

  • Ethereum.org’s minimum for running a node is a 2+ TB SSD and 8 GB RAM; the recommended spec is a 2+ TB fast SSD, 16+ GB RAM, and at least 25 Mbit/s. Disk requirements vary by client:

    • Besu: ~800 GB (snap sync)
    • Geth: ~500 GB (snap sync)
    • Nethermind: ~500 GB (snap sync)
    • Erigon/Reth are archive-oriented: ~2.5 TB (Erigon) and ~2.2 TB (Reth). Add roughly 200 GB for consensus data. (ethereum.org)
  • Geth specifics: a snap-synced full node exceeds 650 GB and grows by about 14 GB per week, so planning around 2 TB avoids frequent pruning. A traditional “hash-based” archive exceeds 12 TB; pruning brings a full node back down to roughly 650 GB. (geth.ethereum.org)
  • Nethermind: a mainnet full node needs 16 GB RAM and four cores; an archive node wants 128 GB and eight cores. Use a fast SSD or NVMe; 2 TB is comfortable for mainnet plus CL. (docs.nethermind.io)
  • Reth: a full node uses around 1.2 TB and an archive about 2.8 TB. Standard RAM is 8-16 GB, with stable bandwidth of at least 24 Mbps recommended. (reth.rs)
  • Besu: defaults to a snap-synced, pruned (Bonsai) configuration at around 800 GB. The JVM minimum is 8 GB, and NVMe is recommended for validators. (besu.hyperledger.org)

Consensus Side:

  • Teku’s validator baseline: at least 4 cores at 2.8 GHz, 16 GB of RAM, and an SSD with 2 TB free. (docs.teku.consensys.net)
  • Lighthouse’s slasher is optional and aimed at experienced operators: budget roughly 256 GB of additional SSD space plus extra CPU and RAM. (Lighthouse book)

Blob sidecars (post‑Dencun)

  • You should plan for about an extra ~48 GiB on average (with a maximum of around ~96 GiB) in rolling CL storage for blobs. The bandwidth increase is pretty mild, sitting at around tens of KB/s sustained, but it’s smart to give yourself some extra space just in case. (docs.teku.consensys.net)

Practical Interpretation:

  • If you're setting up a single mainnet validator today, a solid and reliable setup would be around 4-8 CPU cores, 32 GB of RAM, and between 2-4 TB of TLC NVMe storage. You'll also want at least 50 Mbps down and 25 Mbps up. Now, if you’re thinking of running a slasher, maxing out your peers, or handling heavy local RPC, it's a good idea to bump up your RAM and disk space.
  • The draft EIP‑7870 recommends 4 TB of NVMe and 32-64 GB of RAM for future headroom, but it remains a proposal, not a requirement. (eips.ethereum.org)

Execution-client specifics you should care about

  • Geth pruning and databases:

    • Snap-synced full nodes grow around 14 GB per week. Run an offline prune periodically with geth snapshot prune-state. “History pruning” of pre-merge PoW bodies is also available, so check your version. (geth.ethereum.org)
    • Pebble can be selected with --db.engine=pebble. Geth has been moving to a path-based state scheme in which built-in pruning is the norm and the cache flag matters less than it used to. (geth.ethereum.org)
  • Nethermind Performance Knobs:

    • With 32 GB of RAM or more, enlarge the pruning and RocksDB buffers. With 128 GB or even 350 GB, you can switch to MMAP/no-compression profiles for lower response times on RPC or attestation paths. These are documented tunables. (docs.nethermind.io)
  • Erigon:

    • Erigon is storage-efficient: repo notes cite about 1.1 TB for full and roughly 1.6 TB for archive on mainnet with Erigon 3 in 2025. Enable the trace namespace via --http.api if you need trace RPC. (github.com)
  • Reth:

    • Reth documents full/archive disk budgets clearly and offers optional history indexes: enabling account and storage history indexing speeds up specific RPC access patterns. (reth.rs)
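Pulling the Geth items together, a periodic maintenance pass might look like this (the datadir path and systemd unit name are assumptions; adjust to your deployment):

```shell
# Sketch of offline Geth maintenance, assuming a systemd unit named "geth"
# and a datadir at /var/lib/geth (both are assumptions).
sudo systemctl stop geth

# Offline state prune: shrinks a snap-synced full node back toward ~650 GB.
geth --datadir /var/lib/geth snapshot prune-state

# Pebble must be chosen at first sync; switching engines requires a resync:
#   geth --datadir /var/lib/geth --db.engine=pebble

sudo systemctl start geth
```

Schedule this during a low-traffic window; the node misses attestations while stopped, so solo validators should budget the downtime.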

Network and ports checklist

  • Forward EL 30303 TCP/UDP and your CL client’s default ports: 9000 TCP/UDP for Lighthouse/Teku, 13000/TCP and 12000/UDP for Prysm. Healthy peer counts depend on it. (docs.ethstaker.org)

MEV‑Boost (PBS) considerations for validators

  • MEV-Boost lets validators source blocks from competitive builders via relays, and it is widely used because it improves returns. With a client like Teku, point --builder-endpoint at a local mev-boost instance that fans out to multiple relays. Understand the liveness risks and keep a local fallback in place. (docs.teku.consensys.net)
  • The proposer flow (validator registration, header retrieval, blinded-block submission) and relay interactions are fully documented. Run multiple relays and rely on your client’s “circuit breaker” fallback so the node can still propose locally if relays misbehave. (boost.flashbots.net)
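A minimal sketch of the mev-boost + Teku wiring, with placeholder relay URLs; verify flag names and the default port against your installed versions:

```shell
# Start mev-boost against two (placeholder) relays; -relay-check verifies
# relay liveness at startup.
mev-boost -mainnet -relay-check \
  -relays https://relay-a.example,https://relay-b.example &

# Point Teku at the local builder endpoint (18550 is mev-boost's default
# listen port) and enable builder registrations.
teku --builder-endpoint=http://127.0.0.1:18550 \
  --validators-builder-registration-default-enabled=true
```

Real relay URLs embed the relay's public key (pubkey@host); the bare hostnames above are placeholders only.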

Two validated hardware profiles (2026)

  • Minimal-but-safe validator (solo, 1-2 keys):

    • Go with 4 cores, 32 GB of RAM, and 2 TB of TLC NVMe (not QLC), at least 50/25 Mbps, a UPS, and forwarded ports. Geth+Teku or Nethermind+Lighthouse both work well. Plan a monthly Geth prune and keep an extra 50-100 GiB for blob sidecars. (geth.ethereum.org)
  • Enterprise validator (multiple keys, MEV‑Boost, dashboards):

    • For a more robust setup, think about having 8-16 cores, 64 GB of RAM, and a mirrored 4 TB TLC NVMe. Dual NICs and redundant power are also great features to include. You’ll want to run a remote signer, like Web3Signer, with a slashing protection database and activate builder endpoints across multiple relays. (docs.web3signer.consensys.net)

Role 2: Dedicated Ethereum RPC nodes for high‑throughput workloads

A validator machine isn’t meant to handle all your high-QPS API needs. When you're dealing with heavy RPC calls like eth_call, eth_getLogs, or debug_/trace_, it’s best to have a separate, scaled fleet just for those.

Workload taxonomy

  • Read-only transactional APIs: eth_getBlockBy…, eth_getTransaction…, eth_getBalance, and eth_call.
  • Event scanning: eth_getLogs over large ranges, plus real-time subscription streaming over WebSocket via eth_subscribe.
  • Tracing and forensics: debug_traceTransaction for deep transaction detail, and trace_* for filtering and replay.
  • Mempool/watchers: txpool_* for pending transactions, plus WebSocket subscriptions for live updates. Tune txpool settings only on the boxes that need them. (geth.ethereum.org)
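For reference, the first two families look like this on the wire (the endpoint and the block range are placeholders for one of your read nodes):

```shell
# Placeholder endpoint for a read node.
RPC=http://localhost:8545

# Read family: balance of an address at the latest block.
curl -s -X POST -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"eth_getBalance","params":["0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045","latest"]}' \
  "$RPC"

# Event-scan family: logs over a bounded block range. Keep ranges small;
# unbounded scans are what the dedicated logs pool is for.
curl -s -X POST -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":2,"method":"eth_getLogs","params":[{"fromBlock":"0x112A880","toBlock":"0x112A8E4"}]}' \
  "$RPC"
```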

Why client choice matters for RPC

  • Geth

    • Very stable with broad ecosystem compatibility. Its GraphQL endpoint bundles multiple fields into one query, cutting round trips for complex dashboards. (geth.ethereum.org)
  • Nethermind

    • Strong tracing support via debug_ and trace_, plus documented performance tuning for RAM-rich hosts (file warmers, larger pruning caches, RocksDB options). (docs.nethermind.io)
  • Erigon

    • Lean footprint and a first-class trace namespace (trace_callMany, trace_block, trace_filter, etc.), plus ots_ APIs that accelerate Otterscan. Use --http.api eth,erigon,trace on dedicated tracing nodes. (docs.erigon.tech)
  • Reth

    • A fast, modern Rust client. Official guidance cites ~1.2 TB for a full node and ~2.8 TB for an archive, with configurable indexing stages (account and storage history) and clear documentation on how pruning affects RPC availability. A solid choice for low-latency read APIs. (reth.rs)
  • Besu

    • A reliable JVM client with a snap+pruned default and a footprint around ~800 GB. NVMe is recommended for high-throughput RPC. (besu.hyperledger.org)

The Execution JSON-RPC API is standardized and conformance-tested; stick to specified methods for portability. (ethereum.github.io)
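As an example of the consolidation Geth's GraphQL endpoint offers, one request can fetch several block fields at once (assumes Geth was started with --graphql; /graphql is Geth's default path on the HTTP port):

```shell
# One GraphQL round trip replacing several JSON-RPC calls.
curl -s -X POST -H 'Content-Type: application/json' \
  -d '{"query":"{ block { number gasUsed transactionCount } }"}' \
  http://localhost:8545/graphql
```

The same data via JSON-RPC would take an eth_getBlockByNumber call plus client-side field extraction per dashboard widget.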

Architecture patterns that scale

  • Separate pools per capability:

    • Read pool: Set up 2-3 Reth or Geth nodes behind an L4/L7 load balancer (HTTP+WS), using WebSocket sticky sessions for subscriptions.
    • Logs/scan pool: Use either an Erigon archive or Reth with history indexes that are optimized for eth_getLogs and range scans.
    • Trace pool: Go for Erigon with the trace namespace enabled. It's a good idea to keep this pool separate so that tracing spikes won’t mess with your standard read latency. (docs.erigon.tech)
  • WebSockets vs. GraphQL: Stick to WebSockets for any subscription or event workloads, and use GraphQL when you can take advantage of the “one query/one round trip” approach. Geth’s docs for WS and GraphQL will walk you through the needed flags. (geth.ethereum.org)
  • Keep the Engine API private: Make sure your consensus and execution communicate through the authenticated Engine API on localhost:8551 with a JWT secret. It’s crucial not to expose this publicly. (geth.ethereum.org)
  • Peer counts: Set a reasonable cap on --maxpeers for your RPC boxes to minimize P2P noise. Validators, on the other hand, can handle higher peer counts for better resilience. (geth.ethereum.org)
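The Engine API wiring above can be sketched as follows (the secret path and the Lighthouse pairing are assumptions):

```shell
# Generate a shared JWT secret for the authenticated Engine API.
openssl rand -hex 32 > /secrets/jwt.hex

# EL side (Geth): keep the authrpc listener bound to localhost.
#   geth --authrpc.addr 127.0.0.1 --authrpc.port 8551 \
#        --authrpc.jwtsecret /secrets/jwt.hex

# CL side (Lighthouse, as an example pairing):
#   lighthouse bn --execution-endpoint http://127.0.0.1:8551 \
#                 --execution-jwt /secrets/jwt.hex
```

Both clients must read the same secret file; never forward 8551 through your firewall or load balancer.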

Concrete hardware for a high‑throughput RPC box (per node)

  • CPU: 8-16 cores with a solid base clock; Memory: 32-64 GB.
  • Storage: TLC NVMe (avoid QLC), 4 TB for archive or 2 TB for full, with generous free space to avoid SSD performance cliffs. NVMe latency matters more than headline IOPS; Nethermind emphasizes response-time and IOPS sensitivity. (docs.nethermind.io)
  • Network: For busy public endpoints, a 1 Gbps connection is a must; private enterprise clusters can typically get by with 100-500 Mbps, depending on what kinds of workloads you’re running.
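Before trusting a drive with a state DB, a quick fio latency check is worthwhile. This is a rough sketch; the test-file path and the sub-millisecond target are rules of thumb, not official client requirements:

```shell
# 4k random-read latency test against the target filesystem (fio required).
fio --name=randread --filename=/var/lib/eth/fio.test --size=4G \
    --rw=randread --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=60 --time_based --group_reporting

# Look for p99 read latency well under 1 ms. QLC and network-attached
# disks typically fail this once their write/SLC cache is exhausted.
rm /var/lib/eth/fio.test
```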

Example: a three‑tier RPC cluster

  • Tier A (read hot path): 3 Reth full nodes (1.2 TB each), serving HTTP and WebSocket with automated healing. Enable account/storage history indexes only if your product needs “what block did this key change?” queries. (reth.rs)
  • Tier B (logs/indexing): 2 Erigon archive nodes with --http.api eth,erigon,trace; add Otterscan (ots_) support if you run your own explorer. (docs.erigon.tech)
  • Tier C (deep traces): 2 Erigon tracing boxes on separate subnets for ad-hoc trace_replayBlockTransactions and debug_traceTransaction requests, with managed request budgets and batch windows. (docs.erigon.tech)
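A Tier B/C Erigon node from this layout might be launched roughly like this (datadir and bind address are assumptions; check flags against your Erigon version):

```shell
# Erigon archive/tracing node exposing eth, erigon, trace, and ots APIs.
erigon --chain=mainnet --datadir=/var/lib/erigon \
  --http --http.addr=0.0.0.0 --http.port=8545 \
  --http.api=eth,erigon,trace,ots \
  --ws
```

Bind to 0.0.0.0 only on a private subnet behind your load balancer; public exposure of trace APIs invites denial-of-service.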

If your dashboards benefit from single-query aggregation, add a small Geth GraphQL node. (geth.ethereum.org)

Tuning that actually moves the needle

  • Geth

    • Go for snap sync; it’s a good idea to use the OS cache and stick with the default cache splits. Don’t forget to prune history and state offline regularly. If you’re checking out Pebble or the path-scheme, it’s best to resync with a fresh datadir and leave those old cache habits behind. (geth.ethereum.org)
  • Nethermind

    • If your system has plenty of memory, try out the ≥32 GB or ≥128 GB profiles. Make sure to enable the file warmer. For those ultra-RAM setups (≥350 GB), switching to MMAP/no‑compression profiles can really help reduce CPU usage per request. (docs.nethermind.io)
  • Erigon

    • Keep the RPC daemon namespaces light on each host. Only enable trace on the machines that really need it. Archive mode should be considered only if your product features demand it. (docs.erigon.tech)
  • Reth

    • Understand the pruning trade-offs: pruning the sender, tx-lookup, receipts, or history segments disables the corresponding historical RPC calls, so plan indexes and retention around your endpoints’ SLAs. (reth.rs)

Method placement strategy

  • Use eth_call, eth_getBalance, and eth_getTransaction* on "read" nodes.
  • For eth_getLogs wide-range scans and all trace/debug operations, stick to isolated "heavy" nodes.
  • Make sure to run the txpool on a dedicated mempool watcher node since the txpool namespace isn’t standard and can be pretty resource-heavy. (geth.ethereum.org)
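From the client side, placement just means routing method families to different endpoints; the pool hostnames and the trace hash here are hypothetical:

```shell
# Hypothetical internal pool endpoints.
READ=http://read-pool.internal:8545
HEAVY=http://heavy-pool.internal:8545

# Cheap read goes to the read pool...
curl -s -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"eth_blockNumber","params":[]}' \
  "$READ"

# ...while a full transaction trace goes to the isolated heavy pool,
# so a tracing spike cannot inflate read-path latency.
curl -s -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":2,"method":"debug_traceTransaction","params":["0x<txhash>"]}' \
  "$HEAVY"
```

In production the split usually lives in the load balancer (method-based routing on the JSON body) rather than in each client.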

Security and reliability patterns

  • Remote signing: run a dedicated Web3Signer with a slashing-protection database for your consensus keys, so you can swap execution and consensus clients without double-signing risk. (docs.web3signer.consensys.net)
  • MEV-Boost liveness: configure multiple relays and enable your client’s circuit-breaker fallback to local block building; test relay outages in staging. (docs.flashbots.net)
  • Keep the Engine API private: never expose port 8551 beyond your host or cluster boundary. (geth.ethereum.org)
  • Monitoring: use client dashboards; Geth’s Grafana panels track P2P ingress/egress, txpool saturation, and peer health. Set SLOs around p50/p95 RPC latency per method family. (geth.ethereum.org)

Port, bandwidth, and blob planning quick math

  • Ports to open/forward:

    • EL: 30303 TCP/UDP
    • CL: 9000 TCP/UDP (for most setups) or 13000/TCP + 12000/UDP (if you're using Prysm) (docs.ethstaker.org)
  • Blob storage (rolling):

    • Average: 3 blobs × 128 KB × 32 blocks/epoch × 4096 epochs ≈ 48 GiB
    • Max: 6 blobs × 128 KB × 32 × 4096 ≈ 96 GiB
    • Consider this extra space in your CL setup. (docs.teku.consensys.net)
  • Bandwidth:

    • Aim for at least 50/25 Mbps for validators and 100+ Mbps for public-facing RPC boxes. Blob traffic is modest compared with P2P and RPC traffic, but keep some headroom. (geth.ethereum.org)
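The rolling-storage figures above can be re-derived with shell arithmetic:

```shell
# Re-derive the blob sidecar storage numbers from the protocol constants.
BLOB_KB=128          # blob size in KB
BLOCKS_PER_EPOCH=32  # slots per epoch
EPOCHS=4096          # retention window

avg_gib=$(( 3 * BLOB_KB * BLOCKS_PER_EPOCH * EPOCHS / 1024 / 1024 ))  # target: 3 blobs/block
max_gib=$(( 6 * BLOB_KB * BLOCKS_PER_EPOCH * EPOCHS / 1024 / 1024 ))  # max: 6 blobs/block

echo "average: ${avg_gib} GiB, peak: ${max_gib} GiB"
```

This prints an average of 48 GiB and a peak of 96 GiB, matching the figures above.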

“Do this, not that” validator checklist (2026)

  • Do:

    • Go for a TLC NVMe drive that's at least 2 TB; plan on doing some monthly pruning if you're running a full Geth setup; set aside an extra 50-100 GiB for blobs; and keep your OS and clients up to date. (geth.ethereum.org)
    • It’s a good idea to run both a consensus client and an execution client on the same machine, and make your Engine API private with a JWT secret. (geth.ethereum.org)
    • Don't forget to forward those P2P ports and check your peers; if you have poor peering, it can really mess with attestation inclusion and synchronization. (docs.ethstaker.org)
    • If you’re using MEV‑Boost, make sure to set up multiple relays along with a circuit breaker fallback. (docs.flashbots.net)
  • Don’t:

    • Avoid exposing JSON‑RPC on your validator host to the public internet; also, steer clear of running heavy trace/debug processes there.
    • Don’t rely on QLC SSDs or networked/capped disks for your state DBs; those latency spikes can lead to missed duties. (docs.nethermind.io)

Example build recipes

1) Solo validator (quiet home/office, 1-4 validators)

  • CPU: 6C/12T
  • RAM: 32 GB
  • Disk: 2 TB TLC NVMe
  • Clients: Geth + Teku, MEV‑Boost with 2-3 relays, plus a UPS and 4G/5G failover
  • Maintenance: prune Geth monthly and keep 50-100 GiB of headroom for the CL blob store post-Dencun. (geth.ethereum.org)

2) Enterprise Validator (Dozens of Keys)

  • CPU: 8-16 cores
  • RAM: 64 GB
  • Disk: Mirrored 4 TB TLC NVMe
  • Clients: Nethermind + Lighthouse; Web3Signer with slashing protection DB; MEV-Boost across multiple relays; on-box Prometheus/Grafana. (docs.nethermind.io)

3) High-throughput RPC stack (internal product APIs)

  • Read: 3 Reth full nodes (1.2 TB each) over HTTP+WS with sticky WS connections; p95 latency target under 100 ms.
  • Logs/traces: 2 Erigon archive nodes with trace enabled, isolated for independent autoscaling.
  • Dashboards: 1 Geth node serving GraphQL for multi-field queries.
  • Engine API strictly private; load-balancer health checks tailored to each method class. (geth.ethereum.org)

Final notes on client selection and diversity

The network thrives when operators mix things up with their EL and CL clients. For those enterprise fleets, try to intentionally distribute across at least two ELs and two CLs. Keep your validator machines simple and straightforward; save the risky performance tweaks and heavy APIs for separate RPC nodes.

If you want to turn the guidance above into a migration plan, 7Block Labs can analyze your specific method mix (eth_call, logs, trace) and recommend the right client blend and storage profile for your SLAs.


References

  • Running a node: current guidance and client disk sizes; add ~200 GB for the consensus layer. (ethereum.org)
  • Geth: hardware requirements, pruning, database engines, GraphQL and RPC transports, peer limits, txpool metrics. (geth.ethereum.org)
  • Dencun and EIP-4844: blob size, retention, and bandwidth. (blog.ethereum.org)
  • Nethermind: system requirements and performance tuning (pruning cache, RocksDB, MMAP). (docs.nethermind.io)
  • Reth: system requirements; pruning and indexing effects on RPC. (reth.rs)
  • Erigon: system requirements and trace namespace usage. (github.com)
  • Besu: system requirements and snap-sync defaults. (besu.hyperledger.org)
  • Ports and forwarding: EthStaker and Prysm docs. (docs.ethstaker.org)
  • MEV-Boost: relay APIs, risks, and circuit breaker. (boost.flashbots.net)

Like what you're reading? Let's build together.

Get a free 30-minute consultation with our engineering team.

7BlockLabs

Full-stack blockchain product studio: DeFi, dApps, audits, integrations.

7Block Labs is a trading name of JAYANTH TECHNOLOGIES LIMITED.

Registered in England and Wales (Company No. 16589283).

Registered Office address: Office 13536, 182-184 High Street North, East Ham, London, E6 2JA.

© 2026 7BlockLabs. All rights reserved.