
By AUJay

Ethereum.org Run a Node Hardware Requirements 2026 and Base Node Requirements for New Validators

Short summary: A 2026 field guide to sizing, buying, and operating Ethereum nodes and validator infrastructure after Fusaka/PeerDAS and the Blob Parameter Only (BPO) increases. It compiles the latest ethereum.org guidance, client docs, and EIPs into concrete specs, build recipes, and operational checklists for teams planning production-grade nodes.


What changed in late‑2025/early‑2026 (and why your hardware plan must, too)

  • Fusaka activated on mainnet on December 3, 2025, introducing PeerDAS so nodes no longer download every blob; Ethereum then scheduled “Blob Parameter Only” (BPO) steps that raise blobs per block. BPO1 raised the target/max blobs to 10/15 on December 9, 2025; BPO2 raises to 14/21 on January 7, 2026. This materially shifts consensus-layer bandwidth/storage profiles while keeping blob data ephemeral. (blog.ethereum.org)
  • EIP‑4844 continues to govern blob data: blobs are ~128 KiB each and must be available for roughly 4096 epochs (~18 days), after which clients may prune them. Operators should provision short‑lived storage for blob sidecars on the consensus side. (eips.ethereum.org)
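As a quick sanity check, the ~18‑day figure falls straight out of the consensus‑layer constants. A minimal Python sketch, assuming the standard 12‑second slots and 32 slots per epoch:

```python
# Derive the blob retention window from EIP-4844 / consensus-layer constants.
SECONDS_PER_SLOT = 12
SLOTS_PER_EPOCH = 32
MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS = 4096  # EIP-4844 retention parameter

retention_seconds = (MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS
                     * SLOTS_PER_EPOCH * SECONDS_PER_SLOT)
retention_days = retention_seconds / 86_400
print(f"Blob retention window: ~{retention_days:.1f} days")  # ~18.2 days
```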

What this means in practice:

  • With a 14‑blob target, a back‑of‑the‑envelope upper bound for additional temporary blob data served by consensus clients is ~12.3 GiB/day (128 KiB × 14 blobs × 7200 slots/day). Over the ~18‑day retention window that is roughly 220–225 GiB. Real usage varies with actual blob load and sampling, but you should budget this headroom on top of your consensus database. (Derived from EIP‑4844 parameters and the BPO schedule.) (eips.ethereum.org)

The authoritative baseline: ethereum.org’s 2026 node specs

According to the ethereum.org “Run a node” page (as of this writing):

  • Minimum (single machine running EL+CL):

    • CPU: 2+ cores
    • RAM: 8 GB
    • SSD: 2 TB
    • Bandwidth: 10+ Mbit/s
  • Recommended:

    • CPU: fast 4+ cores
    • RAM: 16 GB+
    • SSD: fast 2+ TB (NVMe preferred)
    • Bandwidth: 25+ Mbit/s
  • Execution-layer client disk usage (indicative; snap sync vs. archive):

    • Besu: ~800 GB+ (snap), ~12 TB+ (archive)
    • Geth: ~500 GB+ (snap), ~12 TB+ (archive)
    • Nethermind: ~500 GB+ (snap), ~12 TB+ (archive)
    • Erigon: no snap; full pruning possible (~2 TB)
    • Reth: no snap; archive ~2.2–2.8 TB, full ~1.2 TB
  • Consensus layer: typically add ~200 GB for beacon data (more if you run a slasher). (ethereum.org)

Tip: These numbers are indicative, system‑wide estimates. Always size with 30–50% headroom for growth, client upgrades, and extra indexes for APIs.
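As a worked example of that headroom rule, here is a minimal Python sketch. The inputs (a snap‑synced execution DB around 650 GiB, ~200 GiB of beacon data, and a post‑BPO2 blob window of ~225 GiB) are illustrative assumptions drawn from the figures in this guide, not measurements of your node.

```python
# Rough disk-sizing helper for a combined EL+CL machine, applying the 30-50%
# headroom rule above. All inputs are illustrative assumptions, in GiB.
def suggested_capacity_gib(el_db: float, cl_db: float, blob_window: float,
                           headroom: float = 0.4) -> float:
    """Suggested usable SSD capacity with the chosen headroom factor."""
    return (el_db + cl_db + blob_window) * (1.0 + headroom)

# Example: snap-synced EL ~650 GiB, beacon DB ~200 GiB, post-BPO2 blob window ~225 GiB
print(f"Suggested capacity: ~{suggested_capacity_gib(650, 200, 225):,.0f} GiB")  # ~1,505 GiB
```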


Execution clients in 2026: precise storage and pruning realities

  • Geth

    • Snap‑synced full node still lands at >650 GB and grows ~14 GB/week. Periodic offline pruning returns usage to the baseline; plan >2 TB to avoid emergency pruning cycles (a rough prune‑cadence sketch follows this section). (geth.ethereum.org)
    • New path‑based archive mode (v1.16+) compresses “archive” down to roughly 1.9–2.0 TB for full history but does not serve historical eth_getProof beyond the recent window; legacy hash‑based archive can exceed 12–20+ TB. Choose the mode based on your proof/query needs. (geth.ethereum.org)
  • Nethermind

    • Suggested mainnet full: 16 GB RAM / 4 cores; archive: 128 GB / 8 cores. Disk: at least 2 TB fast SSD/NVMe; 10k+ IOPS recommended for sync and RPC stability. (docs.nethermind.io)
  • Reth

    • Ethereum mainnet full: ~1.2 TB; archive: ~2.8 TB; stable 24 Mbps+ recommended. Emphasizes high‑quality TLC NVMe. (reth.rs)
  • Erigon

    • Common footprints for Ethereum mainnet today range from sub‑1 TB full to ~1.7–3.5 TB archive depending on version and prune mode; NVMe strongly preferred. Check current docs for your chosen release. (erigon.gitbook.io)

Practical takeaway: For a long‑lived production EL, a 2–4 TB TLC NVMe with DRAM cache is the sweet spot; avoid QLC and low‑IOPS cloud disks for state‑heavy workloads. The community‑maintained “Great and less great SSDs for Ethereum nodes” list is a useful sanity check when selecting models. (gist.github.com)
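To turn those Geth figures into a maintenance schedule, here is a rough, hedged sketch. The baseline size and weekly growth come from the figures above; the reserve left for the OS, consensus DB, and safety margin on a shared disk is an assumption you should tune to your own setup.

```python
# Rough prune-cadence estimate for a snap-synced Geth node. Baseline size and
# weekly growth follow the figures cited above; reserve_gb (OS + consensus DB +
# safety margin on a shared disk) is an assumption - tune it to your setup.
def weeks_until_prune(disk_gb: float, baseline_gb: float = 650.0,
                      growth_gb_per_week: float = 14.0,
                      reserve_gb: float = 300.0) -> float:
    """Weeks of chain growth before usage reaches (disk - reserve)."""
    usable_gb = disk_gb - reserve_gb - baseline_gb
    return max(usable_gb, 0.0) / growth_gb_per_week

for disk in (1000, 2000, 4000):
    print(f"{disk} GB disk: prune roughly every {weeks_until_prune(disk):.0f} weeks")
# 1000 GB -> ~4 weeks, 2000 GB -> ~75 weeks, 4000 GB -> ~218 weeks
```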


Consensus clients in 2026: bandwidth, blobs, and DB sizing

  • Teku minimums for a combined full node (EL+CL) remain: 4 cores @2.8 GHz, 16 GB RAM, 2 TB SSD, broadband ~10 Mbps+. Use a UPS. (docs.teku.consensys.net)
  • Nimbus runs lean but still recommends 2 TB SSD + 16 GB RAM when co‑hosted with an execution client. (nimbus.guide)
  • eth‑docker’s resource snapshots show typical beacon DB sizes on the order of ~80–170 GiB across consensus clients, exclusive of transient blob sidecars. Use this as a starting point for monitoring/alerting thresholds. (ethdocker.com)

Blob‑era planning:

  • After BPO2 (14/21), budget roughly 220–225 GiB of additional short‑lived CL storage for blob sidecars to cover an ~18‑day horizon at target usage; more if you plan supernode modes that retain additional data for network resilience. Validate your client’s pruning defaults and set alerts for blob storage growth. (Derived from EIP‑4844 parameters and Lighthouse supernode documentation.) (eips.ethereum.org)

The 2026 “base node” for new validators: what’s the safe floor?

Pure minimums will work but leave little margin for reorgs, blobs, and growth. The ecosystem has converged on stronger baselines, now also codified in an in‑progress EIP (EIP‑7870) that turns “folk wisdom” into crisp recommendations with PassMark guidance:

  • Full node (EL+CL, no validator duties): 4 TB NVMe, 32 GB RAM, 4c/8t CPU (~1000 ST / 3000 MT PassMark), 50/15 Mbps. (eips.ethereum.org)
  • Attester/validator (MEV‑Boost typical): 4 TB NVMe, 64 GB RAM, 8c/16t (~3500 ST / ~25,000 MT), 50/25 Mbps. (eips.ethereum.org)
  • Local block builder (if you build locally instead of using relays): 4 TB NVMe, 64 GB RAM, 8c/16t, 100/50 Mbps. (eips.ethereum.org)
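For teams scripting procurement checks or config validation, the tiers above can be captured as a simple lookup table. A minimal sketch follows; the values mirror the list above (the builder tier’s PassMark targets are not stated there, so they are omitted), and the helper is purely illustrative.

```python
# The recommended tiers above as a lookup table for procurement/validation
# scripts. Values mirror the EIP-7870-style list above; revise as the EIP evolves.
HARDWARE_TIERS = {
    "full_node": {"nvme_tb": 4, "ram_gb": 32, "cpu": "4c/8t",
                  "passmark_st": 1000, "passmark_mt": 3000,
                  "down_up_mbps": (50, 15)},
    "attester": {"nvme_tb": 4, "ram_gb": 64, "cpu": "8c/16t",
                 "passmark_st": 3500, "passmark_mt": 25000,
                 "down_up_mbps": (50, 25)},
    "local_builder": {"nvme_tb": 4, "ram_gb": 64, "cpu": "8c/16t",
                      # PassMark targets not specified for this tier above
                      "down_up_mbps": (100, 50)},
}

def meets_tier(tier: str, nvme_tb: float, ram_gb: int) -> bool:
    """Quick check of a candidate machine against a tier's disk/RAM floor."""
    spec = HARDWARE_TIERS[tier]
    return nvme_tb >= spec["nvme_tb"] and ram_gb >= spec["ram_gb"]

print(meets_tier("attester", nvme_tb=4, ram_gb=64))  # True
```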

Why this matters now: Fusaka’s PeerDAS plus rising blob throughput changes validator bandwidth sensitivity, especially for proposer/builder roles. If you use MEV‑Boost and ever need to fall back to local block building, the higher bandwidth tier can be the difference between timely propagation and missed value. (eips-wg.github.io)


Practical build recipes (Bills of Materials you can actually order)

Note: Exact models evolve quickly; the key is the class of component. Use TLC NVMe with DRAM cache and high endurance (TBW), and verify with the “good SSDs” list when in doubt.

  1. Home/SMB validator + light RPC
  • CPU: 8c/16t desktop‑class with strong single‑thread (PassMark ST ~3500+)
  • RAM: 32–64 GB DDR4/DDR5
  • Storage: 2–4 TB TLC NVMe (DRAM), plus a second 1–2 TB SSD for OS/backups
  • Network: 50/25 Mbps or better, wired Ethernet
  • Power: UPS sized for at least 15–30 minutes
  • Rationale: Big enough for EL+CL, validator, MEV‑Boost, and modest RPC without paging or choking during blob spikes. For SSD model selection, consult the community drive list; avoid QLC and DRAM‑less models. (eips.ethereum.org)
  2. Split EL/CL with remote signer (higher resilience)
  • EL box: 8c/16t, 32 GB RAM, 2–4 TB NVMe; CL box: 4–8c, 16–32 GB RAM, 1 TB NVMe (+ blob overhead)
  • Remote signer: Web3Signer or equivalent with Postgres slashing DB; modest CPU/RAM (Web3Signer often <2 GB heap even at scale)
  • Why: Reduces blast radius; signer isolates keys; clean failover between beacon nodes is safer. (docs.web3signer.consensys.io)
  3. Data‑center “local builder” node
  • CPU: 8c/16t+ server‑class with high ST score
  • RAM: 64–128 GB
  • Storage: 4 TB+ TLC/enterprise NVMe (consider RAID1)
  • Network: 100/50 Mbps or better, low‑latency uplink
  • Why: Builder workloads benefit from CPU and bandwidth; EIP‑7870 captures the higher recommended tier. (eips.ethereum.org)

SSD selection pro tip: Operators routinely report SSD latency/IOPS determines sync stability more than nominal sequential throughput. The community gist includes DRAM/TLC models that consistently work under client write patterns (e.g., WD Red SN700, Seagate FireCuda 530). (gist.github.com)


Client choices and state management: 2026 nuance that saves outages

  • Geth users: schedule offline pruning to keep snap‑synced nodes around 650 GB; with a 1 TB disk you must prune roughly monthly or move to 2 TB+. If you need full historical queries, consider path‑based archive mode (~2 TB) but note the eth_getProof limits for old blocks. (geth.ethereum.org)
  • Nethermind: target 10k+ IOPS SSDs and 2 TB capacity for EL+CL on one machine; archive requires heavy RAM (128 GB). (docs.nethermind.io)
  • Reth: competitive “full” ~1.2 TB and ~2.8 TB archive footprints with emphasis on TLC NVMe; suitable for both home and DC. (reth.rs)
  • Consensus DBs: monitor growth and set alerts based on eth‑docker snapshots for your client; renew blob sidecar retention assumptions post‑BPO2. (ethdocker.com)

Client diversity: Always pair minority clients when reasonable; check current distributions before choosing. The canonical overview and links live at clientdiversity.org and ethereum.org’s client diversity explainer. (clientdiversity.org)


Networking and ports: get peering right on day one

Forward and allow the P2P ports for both the execution and consensus client so you can actually find peers and serve the network:

  • Execution: 30303 TCP/UDP (Geth/Besu/Nethermind; Erigon also uses 30304 in some configs)
  • Consensus: 9000 TCP/UDP (Lighthouse/Teku/Nimbus/Lodestar), Prysm uses 13000 TCP and 12000 UDP
Lock down JSON‑RPC/REST/metrics interfaces to localhost, or reach them via SSH tunnel/VPN; don’t expose them publicly. (docs.ethstaker.org)

If you use containers/automation, validate which ports are bound to the host vs internal network to avoid accidental exposure. (docs.slingnode.com)
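Before debugging router or firewall forwarding, it helps to confirm the clients are actually listening on their P2P TCP ports. A minimal local sketch, assuming the default ports above (UDP discovery and external reachability need separate checks):

```python
# Check that the EL/CL P2P TCP ports are listening locally. Ports follow the
# defaults above; adjust if you changed client config. This does not verify
# UDP discovery or reachability from outside your network.
import socket

P2P_TCP_PORTS = {
    "execution (Geth/Besu/Nethermind)": 30303,
    "consensus (Lighthouse/Teku/Nimbus/Lodestar)": 9000,
}

for name, port in P2P_TCP_PORTS.items():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(2)
        listening = s.connect_ex(("127.0.0.1", port)) == 0
        print(f"{name}: port {port} {'listening' if listening else 'NOT listening'}")
```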


Validator operations: 2026‑grade reliability patterns

  • Use MEV‑Boost? The mev‑boost process itself is lightweight, but ensure the validator host has bandwidth headroom (50/25 Mbps recommended; more if falling back to local building). (docs.flashbots.net)
  • Remote signing with slashing protection:
    • Web3Signer requires Java 21+ and a Postgres slashing DB; scale horizontally behind a load balancer if running many keys. (docs.web3signer.consensys.io)
    • Configure slashing protection properly; if Lighthouse handles slashing locally for some keys and Web3Signer for others, follow docs to avoid double protection conflicts. (docs.web3signer.consensys.io)
  • Proposer‑only beacon nodes: Lighthouse supports splitting proposer/attester roles to reduce DoS risk around proposals; consider this architecture at scale. (lighthouse-book.sigmaprime.io)
  • DVT (Distributed Validator Technology): If you’re clustering validators (Obol, SSV), size machines with at least 16–32 GB RAM and 2–4 TB NVMe, and mind the additional network/IOPS overhead and port matrix. (docs.obol.org)

Uptime target: you want near‑continuous connectivity; failures during high‑blob epochs can snowball. Redundant ISP or a 5G failover plus a UPS is cheap insurance.


“Base” clarification: L2 Base nodes ≠ Ethereum validators

  • Base (Coinbase’s OP Stack L2) does not have L1‑style validators you can run; Base nodes are execution nodes syncing the L2 chain. If you need one for data/indexing, Reth’s Base profile calls for 2 TB (full) or 4.1 TB (archive) and unusually high RAM (128 GB+) on Base, as of mid‑2025 docs. That’s very different from Ethereum mainnet footprints. (reth.rs)

If your goal is to validate Ethereum itself (earn consensus rewards), follow ethereum.org’s solo staking/launchpad path and run EL+CL+VC on Ethereum mainnet. (ethereum.org)


Blob‑era capacity math you can reuse in planning

Use this quick planner to estimate consensus‑layer transient storage for blobs:

  • Per block at target: blobs_per_block × 128 KiB.
  • Per day: previous × 7200 blocks/day.
  • Retention window: previous × ~18 days (4096 epochs, per EIP‑4844).

Examples:

  • BPO1 (10 target): ~8.8 GiB/day; ~160 GiB over 18 days.
  • BPO2 (14 target): ~12.3 GiB/day; ~224 GiB over 18 days.

This is an upper bound at the blob target rather than a permanent footprint, and PeerDAS reduces the need to fetch and store all blob data, but operators should still budget local disk and bandwidth for peak conditions. (eips.ethereum.org)
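Here is the same planner as a small Python function. It reproduces the BPO1/BPO2 figures above; treat the outputs as upper bounds at the blob target, since sampling nodes typically store less.

```python
# Minimal blob-storage planner per the steps above. Outputs are upper bounds
# at the blob target; PeerDAS sampling means most nodes store a subset of this.
BLOB_SIZE_KIB = 128
SLOTS_PER_DAY = 7200
RETENTION_SLOTS = 4096 * 32  # 4096 epochs (EIP-4844), ~18.2 days

def blob_storage_gib(target_blobs: int) -> tuple[float, float]:
    """Return (GiB per day, GiB over the retention window) at the blob target."""
    per_day = target_blobs * BLOB_SIZE_KIB * SLOTS_PER_DAY / (1024 * 1024)
    window = target_blobs * BLOB_SIZE_KIB * RETENTION_SLOTS / (1024 * 1024)
    return per_day, window

for label, target in [("BPO1", 10), ("BPO2", 14)]:
    day, window = blob_storage_gib(target)
    print(f"{label} ({target} target): ~{day:.1f} GiB/day, ~{window:.0f} GiB retained")
```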


Go‑live checklist for decision‑makers

  • Hardware
    • TLC NVMe with DRAM, ≥2 TB for EL+CL, 4 TB if you want fewer upgrades; endurance ≥1,000 TBW preferred for heavy RPC. (gist.github.com)
    • RAM: 32 GB baseline; 64 GB if running validators + local builder or multiple clients.
    • CPU: Prioritize single‑thread performance; target EIP‑7870 PassMark tiers for your role. (eips.ethereum.org)
  • Network
    • Forward EL 30303 TCP/UDP and CL 9000 TCP/UDP (or Prysm 13000/12000). Keep RPC/REST/metrics private. (docs.ethstaker.org)
    • Validate 50/25 Mbps+ for validators; 100/50 Mbps for local builders. (eips.ethereum.org)
  • Software
    • Choose a minority client combo when feasible; verify current diversity dashboards before committing. (clientdiversity.org)
    • If using Geth, plan prune windows or adopt path‑based archive if you need history without 12–20 TB disks. (geth.ethereum.org)
  • Security/Resilience
    • Remote signer with Postgres slashing DB; consider proposer‑only BN and DVT for larger fleets. (docs.web3signer.consensys.io)
    • UPS on every box; optional second ISP or LTE/5G failover.

A note on growth and timing

  • History expiry and other post‑Pectra/Fusaka cleanups continue to evolve node footprints; keep an eye on EF blog posts for “interfork” adjustments. (blog.ethereum.org)
  • Blob capacity increases are now configuration‑driven (BPOs). Ensure both EL and CL are on the releases cited by EF for the live network to avoid consensus mismatches. (blog.ethereum.org)

Bottom line recommendations for 2026 deployments

  • For a new validator planning a 2–3 year horizon, start at 8c/16t, 64 GB RAM, and 4 TB TLC NVMe with a 50/25 Mbps+ link; split EL/CL and add a remote signer as you scale. This aligns with current EIP‑7870 guidance and the blob‑era realities. (eips.ethereum.org)
  • If you expect heavy RPC or occasional local block building, move to 100/50 Mbps and treat SSD latency/IOPS as a first‑class SLO; select drives from proven operator lists. (gist.github.com)
  • Reassess storage quarterly: Geth’s path‑based archive, Reth’s compact archives, and Erigon pruning all keep footprints manageable—but only if you proactively choose the right mode and schedule maintenance. (geth.ethereum.org)

With these concrete specs, modern client modes, and blob‑era adjustments, your team can make confident 2026‑grade procurement and architecture decisions—and avoid midnight pages from a full disk or a stalled sync.

Like what you're reading? Let's build together.

Get a free 30‑minute consultation with our engineering team.
