
By AUJay

Ethereum Full Node Disk Size and Storage Requirements in 2026

Short version: As of January 2026, running a production‑grade Ethereum full node typically consumes 0.9–1.3 TB for the execution client plus 80–200 GB for the consensus client, with an additional 100–150 GB for blobs. Plan for at least a fast 2 TB NVMe today; use 4 TB if you run heavy RPC, archives, or want multi‑year headroom. (ethereum.org)


Why this matters to decision‑makers in 2026

Storage is the hidden bottleneck for Ethereum nodes. Since Dencun/Deneb (EIP‑4844) added blob data and 2025’s partial history expiry (EIP‑4444’s first step) changed what “full history” means on the execution layer, the “right” disk size depends on your client, pruning settings, and whether you serve historical queries. The details below aggregate the most recent, client‑specific figures so you can size correctly on‑prem or in cloud. (prysm.offchainlabs.com)


What changed since last year

  • Blobs (EIP‑4844) add a rolling ~18‑day window of extra data on consensus clients. Typical extra footprint is roughly 100–150 GB, pruned automatically after the retention period unless you override it. (prysm.offchainlabs.com)
  • Partial History Expiry (PHE) landed across execution clients on July 8, 2025, letting you drop pre‑Merge block bodies and receipts, commonly reclaiming 300–500 GB on existing nodes. This makes 2 TB NVMe viable again for most full nodes. (blog.ethereum.org)
  • Geth's path-based archive mode (v1.16+) shrank the archive footprint to roughly 2 TB for full state history, with the tradeoff that eth_getProof works only for the latest ~128 blocks; use the legacy hash-based archive if you need historical proofs. (geth.world)

Current execution‑layer (EL) disk sizes in 2026

Expect numbers to creep up through 2026; these are the latest vendor‑maintained snapshots.

  • Geth

    • Freshly synced full node: ~1.2 TiB; with pre-Merge history expired: ~830 GiB; typical growth of ~7–8 GiB/week on current versions. Offline pruning and history-prune tools are available. (ethdocker.com)
    • Older baseline guidance you may still see: snap‑synced node >650 GB growing ~14 GB/week, prunable back to ~650 GB. Useful as a lower bound, but superseded by newer snapshots above. (geth.ethereum.org)
    • Path‑based archive: ~1.9–2.0 TB for full history; not suitable if you need eth_getProof beyond the recent window. Hash‑based archive can exceed 12–20 TB. (geth.world)
  • Nethermind

    • Fresh mainnet full node database often lands around 0.9–1.1 TB; with pre‑Merge expiry enabled, operators report ~740 GiB footprints. Nethermind recommends fast SSD/NVMe with ≥10,000 IOPS; 2 TB is the comfortable single‑box choice. (docs.nethermind.io)
    • Guidance includes automatic full‑pruning triggers; keep ≥250 GB free to ensure pruning completes. (docs.nethermind.io)
  • Erigon (v3)

    • Full: ~920 GB; Minimal: ~350 GB; Archive: ~1.77 TB (Ethereum mainnet). Recommend 2–4 TB depending on mode; 16–64 GB RAM guidance with strong NVMe. (docs.erigon.tech)
  • Besu

    • Snap (pruned) with Bonsai DB: ~805 GB; unpruned: ~1.16 TB; archive around 12 TB. (besu.hyperledger.org)
  • Reth

    • Full: ≈1.2 TB; Archive: ≈2.8 TB. Includes a handy db stats CLI to audit table‑level sizes. (reth.rs)

Quick reality check: ethereum.org’s general guidance still says 2 TB SSD minimum, 2+ TB recommended; assume ~200 GB extra for consensus data. That holds in 2026 if you also budget for blob storage (see next section). (ethereum.org)
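
If you want to check what your own execution database actually occupies, rather than relying on published snapshots, a quick du pass over the datadir is enough. A minimal sketch, assuming Geth's default Linux datadir (~/.ethereum); substitute your client's data directory if it lives elsewhere.

  # Hedged sketch: measure the on-disk footprint of an execution-layer datadir.
  # DATADIR assumes Geth's default Linux location -- adjust for your client/setup.
  DATADIR="${HOME}/.ethereum"

  # Total size of the datadir (chaindata, ancient store, etc.)
  du -sh "${DATADIR}"

  # Break the total down by directory to see where the bytes actually live
  du -h --max-depth=2 "${DATADIR}" | sort -h | tail -n 20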


Consensus‑layer (CL) disk sizes, plus blobs

  • Beacon databases (no slasher):
    • Teku ≈84 GiB; Lighthouse ≈130 GiB; Prysm ≈130 GiB; Nimbus ≈170 GiB. These are recent snapshots; CL clients continue to optimize/prune over time. (ethdocker.com)
    • Nimbus explicitly recommends budgeting ~200 GB for the beacon DB and reiterates that a 2 TB SSD is a sensible “both EL+CL” baseline on one machine. (nimbus.guide)
  • Blobs (EIP‑4844):
    • Default retention is 4,096 epochs (~18 days). Storage impact depends on blob load; budget ~100–150 GB. In Prysm, you can change the path and retention with --blob-path and --blob-retention-epochs; see the back-of-envelope sketch after this list. (prysm.offchainlabs.com)
  • Slasher (optional):
    • Resource‑hungry. Prysm’s slasher can push toward 1 TB on mainnet; not recommended for most home or single‑box setups. (prysm.offchainlabs.com)
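
As a back-of-envelope check on the blob budget above, the retention window and a worst-case blob footprint follow directly from the consensus constants. The sketch below assumes 32 slots per epoch, 12-second slots, and the Dencun-era cap of 6 blobs of 128 KiB per block; later forks raise the blob count, which is why the 100–150 GB budget carries slack.

  # Hedged back-of-envelope: blob retention window and worst-case footprint.
  # Assumes Dencun-era caps (max 6 blobs x 128 KiB per block) -- adjust for later forks.
  EPOCHS=4096 SLOTS_PER_EPOCH=32 SLOT_SECONDS=12
  MAX_BLOBS=6 BLOB_BYTES=$((128 * 1024))

  echo "retention  ~ $((EPOCHS * SLOTS_PER_EPOCH * SLOT_SECONDS / 86400)) days"
  echo "worst case ~ $((EPOCHS * SLOTS_PER_EPOCH * MAX_BLOBS * BLOB_BYTES / 1000000000)) GB"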

What a “typical” 2026 full node looks like (single box)

  • Execution client (Geth/Nethermind/Erigon/Besu): 0.8–1.3 TB depending on client and whether pre‑Merge history is expired.
  • Consensus client: 80–200 GB.
  • Blobs: 100–150 GB (default retention).
  • OS, logs, headroom for compactions/pruning: 200–300 GB free.

Actionable sizing: a 2 TB TLC NVMe with DRAM cache remains viable for validators and light RPC. Use 4 TB if you have heavy RPC, want multi‑year headroom without periodic pruning, or plan to experiment with archive features. (ethereum.org)
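
To sanity-check that advice against a specific drive, add up the worst-case components and compare them with usable capacity. A minimal sketch in decimal GB, using the upper-end figures from the list above; every number here is a planning assumption to replace with your own.

  # Hedged sizing sketch: worst-case single-box budget vs. a "2 TB" drive (decimal GB).
  EL=1300       # execution client, upper end without history expiry
  CL=200        # consensus client beacon DB
  BLOBS=150     # blob retention at the default window
  HEADROOM=300  # OS, logs, compaction/pruning scratch space
  DISK=2000     # nominal 2 TB drive

  TOTAL=$((EL + CL + BLOBS + HEADROOM))
  echo "budget ${TOTAL} GB vs ${DISK} GB -> spare $((DISK - TOTAL)) GB"

  # With pre-Merge history expired (EL ~830 GB), the same box has far more slack:
  echo "with PHE: $((830 + CL + BLOBS + HEADROOM)) GB"

The arithmetic matches the guidance above: without history expiry a 2 TB drive is right at the edge, and with it you regain several hundred gigabytes of slack.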


Enabling and operating history expiry (PHE) today

  • Geth
    • Existing node: stop geth, run geth prune-history --datadir <path>, then restart; see the sketch after this list. New nodes can skip pre-Merge history by starting with the appropriate history option per the docs. Expect hundreds of GB reclaimed. (geth.ethereum.org)
  • Nethermind
    • History expiry is on by default on supported networks; disable via Sync.AncientBodiesBarrier=0 and Sync.AncientReceiptsBarrier=0 if you must keep all history. Era1 archives provide the old data outside your live DB. (docs.nethermind.io)
  • Besu
    • Provides both offline and online pre-Merge prune flows; for the offline path, run besu ... storage prune-pre-merge-blocks followed by a one-time --history-expiry-prune. (blog.ethereum.org)
  • Ecosystem context
    • Partial history expiry is the first step toward EIP‑4444; a future “rolling window” is planned but not yet finalized. Budget storage assuming pre‑Merge expiry now, and be ready for a rolling window later. (eips.ethereum.org)
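
For Geth specifically, the flow described above is a stop/prune/restart cycle. A minimal sketch, assuming a systemd unit named geth and a datadir of /var/lib/geth; both are assumptions, so substitute your own service name and paths.

  # Hedged sketch: reclaim pre-Merge history on an existing Geth node.
  # Assumes a systemd unit called "geth" and datadir /var/lib/geth -- adjust both.
  sudo systemctl stop geth

  df -h /var/lib/geth            # note free space before
  # run this as the user that owns the datadir
  geth prune-history --datadir /var/lib/geth
  df -h /var/lib/geth            # hundreds of GB should come back

  sudo systemctl start geth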

Pruning and housekeeping that actually moves the needle

  • Geth state pruning
    • Run an offline prune periodically (geth snapshot prune-state); see the sketch after this list. Do it before the disk is >80% full, and reserve ≥40–50 GB of free space during the operation. (geth.ethereum.org)
  • Nethermind full pruning
    • Use Pruning.Mode Hybrid and trigger by free‑space or state‑DB size. Keep at least 250 GB free so full pruning can complete safely; otherwise you may be forced to resync. (docs.nethermind.io)
  • Erigon modes
    • Consider Minimal (~350 GB) for EL‑only tasks and conservative disk budgets; use Full (~920 GB) for general RPC; Archive (~1.77 TB) only if you truly need historical state queries. (docs.erigon.tech)
  • Reth visibility
    • Use reth db stats to see exactly which tables dominate your footprint and confirm pruning effects. (reth.rs)
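
For the Geth state prune mentioned above, a small wrapper that refuses to run without the recommended free space avoids a failed prune on a nearly full disk. A sketch, reusing the same hypothetical systemd unit and datadir as earlier:

  # Hedged sketch: offline state prune with a free-space guard (~40-50 GB needed).
  DATADIR=/var/lib/geth          # assumption -- point at your own datadir
  MIN_FREE_GB=50

  FREE_GB=$(df --output=avail -BG "$DATADIR" | tail -1 | tr -d 'G ')
  if [ "$FREE_GB" -lt "$MIN_FREE_GB" ]; then
    echo "only ${FREE_GB} GB free; make room before pruning" >&2
    exit 1
  fi

  sudo systemctl stop geth
  geth snapshot prune-state --datadir "$DATADIR"   # do not interrupt this step
  sudo systemctl start geth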

Disk type: what works vs. what pages you at 3 a.m.

  • Do
    • Use TLC NVMe with DRAM cache, keep SSDs cool, and aim for a baseline of ≥10k IOPS for the EL; see the fio sketch after this list. This shows up in client docs and field experience alike: too-slow disks stall syncs and cause reorg pain. (docs.nethermind.io)
  • Avoid
    • DRAM‑less/QLC consumer SSDs and SATA SSDs for production EL after Dencun; multiple operators report slow syncs and premature wear. Community‑maintained lists track which models fare better. (gist.github.com)
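
Before trusting a drive (or a cloud volume) with an EL database, measure random IOPS instead of reading the spec sheet. A minimal fio sketch approximating a mixed 4K random workload; the target directory, job count, and sizes are assumptions to adapt to your setup.

  # Hedged sketch: rough 4K random read/write benchmark on the future EL volume.
  # Writes temporary test files under /mnt/el (assumed path); requires fio.
  fio --name=el-disk-check \
      --directory=/mnt/el \
      --rw=randrw --rwmixread=70 \
      --bs=4k --direct=1 --ioengine=libaio \
      --iodepth=32 --numjobs=4 \
      --size=4G --runtime=120 --time_based \
      --group_reporting

  # Check the aggregated read/write IOPS lines; results well under ~10k suggest
  # the disk (or the provisioned cloud IOPS) will struggle with sync and state writes.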

Cloud specifics: provision IOPS and throughput, not just GB

  • AWS EBS gp3 now scales up to 80k IOPS and 2,000 MiB/s per volume (since Sept 26, 2025). Under‑provisioned IOPS is a common root cause of “why is my node slow” tickets. (aws.amazon.com)
  • Google Persistent Disk SSD can reach 100k read and 50–100k mixed IOPS per VM depending on vCPU count; ensure your machine type can drive the disk. (docs.cloud.google.com)
  • Azure Premium SSD v2 offers up to 80k IOPS and 1,200 MB/s per disk, with 3,000 IOPS/125 MB/s baseline “free.” Match VM caps to disk caps. (azure.microsoft.com)

Practical tip: if you’re running validators and RPC on one VM, start around 16k–32k provisioned IOPS for the EL volume and tune from there during the initial sync week.
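
On AWS, the provisioning itself is a single CLI call. A sketch, assuming an existing gp3 volume whose ID replaces the placeholder below; the same command dials the numbers back down once the initial sync finishes.

  # Hedged sketch: raise provisioned IOPS/throughput on an existing gp3 EBS volume
  # for the initial sync, then lower it again afterwards. The volume ID is a placeholder.
  aws ec2 modify-volume \
      --volume-id vol-0123456789abcdef0 \
      --volume-type gp3 \
      --iops 16000 \
      --throughput 700

  # Track the modification's progress (it can take a while on large volumes)
  aws ec2 describe-volumes-modifications \
      --volume-ids vol-0123456789abcdef0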


Concrete sizing recipes (January 2026)

  • Validator + light JSON‑RPC on one box
    • 8 cores, 32 GB RAM, 2 TB TLC NVMe with DRAM, wired network. Expect EL 0.8–1.2 TB, CL 80–150 GB, blobs ~100–150 GB, plus headroom. Prune EL monthly or set automatic pruning where supported. (ethereum.org)
  • Read‑heavy RPC (no tracing)
    • Separate EL and CL volumes or hosts (see the sketch after this list). Run the EL on Erigon Full (~920 GB) or Nethermind/Geth with PHE; a 4 TB NVMe is advised for headroom and compactions under load; front it with a proxy and cache hot methods. (docs.erigon.tech)
  • Historical/analytics workloads
    • Geth path‑archive (~2 TB) if you don’t need historical eth_getProof; otherwise plan for legacy archives 12–20 TB or use a managed archive provider. Consider sharded “time‑sliced” archives. (geth.world)
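
One way to implement the volume split from the read-heavy recipe is to mount dedicated filesystems and point each client's data directory (and the blob path) at them. A sketch with assumed device names, mount points, and a Geth + Prysm pairing; every path here is illustrative.

  # Hedged sketch: dedicate separate NVMe volumes to the EL database and CL/blob data.
  # Device names, mount points, and client choice are assumptions -- adapt to your host.
  sudo mkfs.ext4 /dev/nvme1n1 && sudo mkdir -p /mnt/el && sudo mount /dev/nvme1n1 /mnt/el
  sudo mkfs.ext4 /dev/nvme2n1 && sudo mkdir -p /mnt/cl && sudo mount /dev/nvme2n1 /mnt/cl

  # Point the clients at the dedicated volumes (add your usual flags as well)
  geth --datadir /mnt/el
  beacon-chain --datadir /mnt/cl --blob-path=/mnt/cl/blobs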

Configuration snippets operators actually use

  • Geth: prune pre‑Merge history on an existing node
    • Stop geth, then: geth prune-history --datadir=/path/to/datadir. (geth.ethereum.org)
  • Geth: offline state prune
    • geth snapshot prune-state (reserve ≥40–50 GB free; don't interrupt). (geth.ethereum.org)
  • Nethermind: enable hybrid pruning triggered by free space
    • --Pruning.Mode Hybrid --Pruning.FullPruningTrigger VolumeFreeSpace --Pruning.FullPruningThresholdMb 256000. (docs.nethermind.io)
  • Prysm: direct blob storage to a separate volume and tweak retention
    • --blob-path=/mnt/blobs --blob-retention-epochs=6000 (default is 4096; higher retention means more disk). (prysm.offchainlabs.com)
  • Reth: audit DB sizes
    • reth db stats --detailed-sizes. (reth.rs)

Growth and headroom planning

  • Expect EL DB growth in the high single-digit GiB/week on current clients; plan headroom for compactions/pruning and for blob-storage variance. If you keep a 2 TB volume ~60–70% full, you'll have safe space for pruning and surges; the sketch below turns that into a rough runway figure. (ethdocker.com)
  • If you run Nethermind, set pruning thresholds so full pruning kicks in before you dip below ~250 GB free; otherwise a resync may be the only escape hatch. (docs.nethermind.io)
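
To turn that growth rate into a concrete runway, divide remaining free space by weekly growth. A sketch using the figures from the bullets above (a 2 TB volume kept ~65% full, high single-digit GiB/week growth); substitute your own measurements.

  # Hedged sketch: rough pruning runway from free space and weekly DB growth.
  DISK_GIB=1863          # usable GiB on a nominal 2 TB drive
  USED_PCT=65            # target utilization
  GROWTH_GIB_WEEK=8      # observed EL growth per week

  FREE_GIB=$((DISK_GIB * (100 - USED_PCT) / 100))
  echo "~${FREE_GIB} GiB free -> roughly $((FREE_GIB / GROWTH_GIB_WEEK)) weeks before pruning is forced"

In practice you would prune well before that point, but the calculation shows why 60–70% utilization leaves a comfortable margin.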

Common pitfalls we still see in 2026

  • Mixing archive expectations with full‑node hardware. If your product needs historical proofs or deep tracing, a normal full node won’t do—budget for path‑archive or legacy archives, or buy archive access. (geth.world)
  • Assuming consensus clients are “disk‑free.” Beacons consume 80–200 GB and blob retention adds another 100–150 GB; cleaning only the EL won’t fix a CL‑filled disk. (ethdocker.com)
  • Choosing SATA SSDs or DRAM‑less/QLC NVMe for EL. Post‑Dencun write patterns expose their weaknesses; you’ll get sync stalls and rapid wear. (gist.github.com)
  • Under‑provisioning cloud IOPS. If gp3 is at baseline (3k IOPS/125 MB/s), your initial sync can crawl or fail. Provision up and dial back post‑sync. (aws.amazon.com)

Outlook for 2026–2027

  • Execution‑layer history: after the July 2025 PHE milestone, client teams are working toward rolling history windows as envisioned by EIP‑4444. Expect defaults to converge further; plan automation around pruning and snapshots. (eips.ethereum.org)
  • Archives: Geth’s path‑based archives have made “archive” viable on 2 TB, with a caveat on eth_getProof. If you require proofs at arbitrary heights, either keep a legacy hash‑based archive (12–20 TB) or split responsibilities across nodes. (geth.world)

Bottom line: what to buy and deploy in January 2026

  • For most validators and light‑to‑moderate RPC: a single 2 TB TLC NVMe with DRAM cache, paired with a modern 8‑core CPU and 32 GB RAM, remains the sweet spot. Enable PHE, schedule EL pruning, and keep ≥20% free space. (ethereum.org)
  • If you need heavier RPC or long headroom: move to 4 TB NVMe and consider Erigon Full or Geth/Nethermind with aggressive pruning. In cloud, start at 16k–32k IOPS for the EL volume and scale down after sync. (docs.erigon.tech)
  • If you need historical state at scale: use Geth path‑archive on ~2 TB if historical proofs aren’t required; otherwise budget double‑digit TB or use a managed archive provider. (geth.world)

Sources and further reading

  • Official client docs and hardware pages (disk sizes, pruning, history expiry, archive modes): Geth, Nethermind, Erigon, Besu, Reth, ethereum.org. (geth.ethereum.org)
  • Consensus clients and blob storage notes: eth‑docker resource usage matrix; Prysm blobs guide; Nimbus requirements. (ethdocker.com)
  • Partial history expiry announcements and status: EF blog, EIP‑4444, implementation plans. (blog.ethereum.org)
  • SSD model realities and operator experience: community maintained “Great and less great SSDs for Ethereum nodes.” (gist.github.com)
  • Cloud IOPS/throughput baselines to avoid slow syncs: AWS, GCP, Azure docs. (aws.amazon.com)

7Block Labs helps teams pick the right client mix, automate pruning and snapshots, and design storage topologies that won’t wake you at 3 a.m. If you’d like a second pair of eyes on your 2026 node plan, we’re happy to review configs and produce a sizing BOM for your workload.

