By AUJay
Ethereum API and Ethereum.org “Run a Node” Hardware Requirements in 2026
What changed for running Ethereum nodes after Dencun and 2025–2026 Fusaka/BPO upgrades, and what that means for your hardware, bandwidth, and API architecture in 2026. This guide distills the latest requirements from ethereum.org and client teams into concrete specs and production patterns for startups and enterprises.
TL;DR for decision‑makers
- Ethereum.org still lists a 2 TB SSD, 8–16 GB RAM, and 10–25+ Mbit/s as the baseline, but 2026 realities (higher blob targets after Fusaka/BPO and heavier API workloads) push production-grade full nodes toward 4 TB NVMe, 32+ GB RAM, and 50/15+ Mbit/s—especially for validators and in-house RPC. (ethereum.org)
- Blob throughput increases (PeerDAS + BPO forks) raise consensus-layer bandwidth and short-term storage needs; plan 160–224 GiB extra for blob retention alone at the new targets. See the math below. (blog.ethereum.org)
1) What’s new in 2026: PeerDAS, Fusaka, and BPO forks
- The 2025–2026 “Fusaka” upgrade (Fulu consensus + Osaka execution) activated on mainnet on December 3, 2025, then rolled out two Blob Parameter Only (BPO) bumps that lifted per‑block blob targets and max limits:
- BPO1 (Dec 9, 2025): target/max → 10/15
- BPO2 (Jan 7, 2026): target/max → 14/21
These are pre-configured in client releases, so no separate client upgrade is needed for each BPO—but you must be on Fusaka‑ready versions. Expect higher bandwidth and transient storage pressure on consensus clients. (blog.ethereum.org)
- PeerDAS (data-availability sampling) means typical full beacon nodes sample and store only parts of blob data. For full blob retrieval post‑Fusaka, Lighthouse introduces explicit modes (--supernode/--semi-supernode) or very high custody thresholds; operators should no longer assume Beacon APIs will return full blobs by default. (github.com)
Implication: If your business depends on historical blob reads (e.g., L2 analytics) via Beacon APIs, allocate more disk and bandwidth and explicitly run a “supernode/semi‑supernode” mode where applicable—or shift blob archival to specialized infra. (github.com)
2) Ethereum.org’s 2026 baseline and how to interpret it
Ethereum.org “Run a node” (updated late 2025) lists:
- Minimum: 2+ core CPU, 8 GB RAM, 2 TB SSD, 10+ Mbit/s.
- Recommended: 4+ cores, 16+ GB RAM, 2+ TB fast SSD, 25+ Mbit/s.
- Execution client disk (snap or equivalent/pruned) rough sizes: Geth 500 GB+, Nethermind 500 GB+, Besu 800 GB+, and archive ≈ 12 TB+ (Erigon/Reth have different modes; see client notes below). Add ≈200 GB for beacon data in typical configurations. (ethereum.org)
Reality check for 2026:
- With higher blob targets, consensus workloads consume more bandwidth and short‑term disk than in early 2024. Treat ethereum.org figures as absolute minimums; plan headroom. EIP‑7870 recommends a production “Full Node” at 4 TB NVMe, 32 GB RAM and 50/15 Mbps. (eips.ethereum.org)
3) Precise client‑by‑client storage and RAM today
Execution clients (Mainnet, current documentation snapshots):
- Geth
- Snap‑synced full node: >650 GB (grows ≈14 GB/week; pruning resets to ~650 GB).
- Archive: “full” archive >12 TB (legacy hash-based); newer modes vary by feature support.
- Plan 2 TB to avoid aggressive maintenance. (geth.ethereum.org)
- Nethermind
- Suggested mainnet: 16 GB RAM / 4 cores; archive: 128 GB / 8 cores.
- Disk: “as of Oct 2024” combined EL+typical CL ≈2 TB; Nethermind itself ≈1 TB. (docs.nethermind.io)
- Erigon (v3 docs)
- Minimal ≈350 GB, Full ≈920 GB, Archive ≈1.77 TB (execution DB only; select prune modes).
- Recommends 1–4 TB NVMe depending on mode; 16–64 GB RAM. (docs.erigon.tech)
- Reth (as of 2025‑06‑23)
- Full ≈1.2 TB; Archive ≈2.8 TB; stable 24+ Mbps advised; emphasizes TLC NVMe. (reth.rs)
Consensus clients (beacon data only, excluding blobs):
- Nimbus guidance: ~200 GB disk for a full beacon node; when co‑hosting EL+CL, 2 TB SSD + 16 GB RAM is a workable baseline. (nimbus.guide)
- Teku practical full‑node+validator baseline: 4 cores, 16 GB RAM, 2 TB SSD. (docs.teku.consensys.net)
Lightweight option: the Nimbus Consensus Light Client uses <1 MB of disk, trading ~15 s of head lag and weaker security assumptions for its tiny footprint—useful for non‑validating EL nodes that just need a Beacon counterpart to run. Not for validators. (nimbus.guide)
4) How blobs change storage math in 2026 (with real numbers)
Constants from EIP‑4844:
- Blob size = 4096 field elements × 32 bytes = 131,072 bytes = 128 KiB.
- Retention ≈ 4096 epochs ≈ 131,072 blocks ≈ 18 days on the beacon node (temporary). (eips.ethereum.org)
Storage math for kept blobs (target blobs per block × 128 KiB × 131,072 blocks):
- Pre‑Fusaka target 3 → 3 × 128 KiB × 131,072 ≈ 48 GiB (matches Teku’s estimate). (docs.teku.consensys.io)
- BPO1 target 10 → ~160 GiB.
- BPO2 target 14 → ~224 GiB.
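For quick capacity planning, the retention figures above can be reproduced with a few lines of Python (stdlib only; the constants are the protocol values cited above):

```python
# Blob retention storage per the math above (EIP-4844 constants).
BLOB_BYTES = 4096 * 32        # 4096 field elements x 32 bytes = 128 KiB per blob
RETENTION_BLOCKS = 131_072    # ~4096 epochs of retention on the beacon node

for label, target in [("pre-Fusaka", 3), ("BPO1", 10), ("BPO2", 14)]:
    gib = target * BLOB_BYTES * RETENTION_BLOCKS / 2**30
    print(f"{label}: target {target} blobs/block -> ~{gib:.0f} GiB retained")
# pre-Fusaka: ~48 GiB, BPO1: ~160 GiB, BPO2: ~224 GiB
```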
Practical planning:
- Add 160–224 GiB above your usual beacon storage budget for 2026, depending on when your region hit BPO2 and your client’s storage overhead (indexes, metadata). If you opt in to “supernode”/full‑blob availability, budget more. (github.com)
5) Bandwidth you actually need (validators, builders, full nodes)
EIP‑7870 distills field tests into concrete bandwidth targets:
- Full node: 50 Mbps down / 15 Mbps up.
- Attester (validator): 50 / 25 Mbps.
- Local block builder (if you build payloads yourself): 100 / 50 Mbps. (eips.ethereum.org)
Why the step‑up in 2026:
- Post‑Fusaka BPOs raise blob counts, increasing gossip bandwidth; client releases (e.g., Lighthouse v8) explicitly warn about bandwidth increases as blob target/max rises. (github.com)
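A back-of-envelope check makes the step-up intuitive: raw blob payload scales linearly with the target. This sketch only estimates the floor; gossip mesh duplication, column distribution, and sync bursts multiply it, which is the headroom EIP‑7870's figures absorb:

```python
# Raw blob payload rate per slot at various targets (a floor, not a budget:
# gossip mesh duplication typically multiplies this several-fold).
BLOB_BYTES = 131_072   # 128 KiB per blob (EIP-4844)
SLOT_SECONDS = 12

for target in (3, 10, 14):
    mbps = target * BLOB_BYTES * 8 / SLOT_SECONDS / 1e6
    print(f"target {target}: ~{mbps:.2f} Mbit/s raw blob payload")
# target 3: ~0.26, target 10: ~0.87, target 14: ~1.22 Mbit/s
```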
6) The Ethereum API surface in 2026: what to expose and how
Think of four layers of API, each with different trust and performance profiles:
- Execution JSON‑RPC (public‑facing via your proxy)
- Canonical spec lives in the Execution APIs repo; ethereum.org’s reference highlights important conventions including the “safe” and “finalized” block tags. Use these tags deliberately in read paths that need probabilistic or final safety. (ethereum.github.io)
- Geth transports: HTTP, WS, and IPC—enable only what you need, and isolate public HTTP behind a reverse proxy that rate‑limits and method‑filters. (geth.ethereum.org)
- Erigon’s rpcdaemon supports method allowlisting (e.g., --rpc.accessList=rules.json), a best practice for public endpoints. (docs.erigon.tech)
- Engine API (CL↔EL only; never public)
- Default port 8551, secured by a JWT secret shared between EL and CL. Example (Besu): --engine-rpc-port=8551 --engine-host-allowlist=localhost,127.0.0.1 --engine-jwt-secret=jwt.hex. Geth’s equivalent flags are --authrpc.port 8551 --authrpc.jwtsecret <path>. Keep this bound to localhost or a private interface. (besu.hyperledger.org)
- Beacon API (CL REST; not for the public internet)
- Standardized in the Beacon‑APIs repo. After PeerDAS, don’t assume blob retrieval endpoints return full payloads without opting into supernode modes (client‑specific). Lock this behind your private network. (github.com)
- Builder/Relay APIs (MEV‑Boost ecosystem)
- Follow Flashbots’ Relay/Builder API specs; validate relay set and health; expect increased data exchange under higher blob counts. (github.com)
Emerging utility RPCs to adopt in ops tooling:
- eth_chainId (EIP‑695) for robust chain identification. (eips.ethereum.org)
- eth_config (EIP‑7910, last‑call in 2025) to query current/next fork params, including the blob schedule—use it in pre‑fork checks to prevent misconfigurations; a probe sketch follows. (eips.ethereum.org)
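A minimal pre-fork probe sketch, assuming a private execution endpoint at http://localhost:8545 (an assumption; adjust for your topology). It exercises eth_chainId, a "finalized"-tag read as discussed above, and eth_config, treating a missing-method error as "not yet Fusaka-ready" rather than a hard failure, since EIP‑7910 support is still rolling out:

```python
# Hedged ops probe: stdlib-only JSON-RPC calls against a local node.
import json
import urllib.request

RPC = "http://localhost:8545"  # assumption: private EL endpoint

def rpc_call(method, params=None):
    body = json.dumps({"jsonrpc": "2.0", "id": 1,
                       "method": method, "params": params or []}).encode()
    req = urllib.request.Request(RPC, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

print("chainId:", int(rpc_call("eth_chainId")["result"], 16))  # mainnet -> 1
finalized = rpc_call("eth_getBlockByNumber", ["finalized", False])["result"]
print("finalized block:", int(finalized["number"], 16))
cfg = rpc_call("eth_config")  # EIP-7910; older clients return method-not-found
print("fork config:", cfg.get("result") or cfg.get("error"))
```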
7) Concrete 2026 hardware profiles (pick by role)
Baseline assumptions: production Linux, TLC NVMe with DRAM cache, ECC RAM where possible, UPS and monitored power/cooling.
- Full node + light internal RPC (post‑BPO2 targets)
- CPU: 4–8 cores, strong single‑thread.
- RAM: 32 GB (room for client caches and OS I/O).
- Storage: 4 TB TLC NVMe (EL DB + CL base + 224 GiB blobs + growth).
- Network: 50/15+ Mbps; unmetered preferred. (eips.ethereum.org)
- Validator box (1–4 validators) with MEV‑Boost
- CPU: 8 cores (latency headroom).
- RAM: 32–64 GB.
- Storage: 4 TB NVMe; separate OS disk recommended for resilience.
- Network: 50/25+ Mbps; dual‑WAN if feasible. (eips.ethereum.org)
- Local block builder (e.g., research, private order flow)
- CPU: 8c/16t upper‑mid server tier; high ST/MT PassMark as per EIP‑7870.
- RAM: 64 GB.
- Storage: 4 TB NVMe (high IOPS).
- Network: 100/50+ Mbps; low jitter. (eips.ethereum.org)
- Read‑heavy private RPC (block explorer, indexers)
- Multiple ELs behind a proxy; consider mixed clients (Erigon for historical/range queries, Reth/Geth for head traffic).
- RAM: 64–128 GB across the pool.
- Storage: 2–8 TB NVMe per node depending on prune mode and retention; archive tiers offloaded to specialized nodes. (docs.erigon.tech)
Note: If you co‑host EL+CL on a single box, factor the blob retention budget (160–224 GiB) and leave 20–25% NVMe free space for performance. (docs.teku.consensys.io)
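To see why this guide favors 4 TB over the 2 TB baseline, here is a rough co-hosted disk budget using the figures above (the client numbers are the documentation snapshots from section 3, so treat the output as an estimate, not a quote):

```python
# Rough EL+CL disk budget for a co-hosted box (figures from sections 3-4).
GETH_SNAP_GB = 650          # snap-synced Geth after pruning
GETH_GROWTH_GB_WEEK = 14    # state growth between prunes
CL_BASE_GB = 200            # beacon node data, excluding blobs
BLOB_GIB = 224              # BPO2 blob retention (section 4)
FREE_FRACTION = 0.25        # keep 20-25% of the NVMe free for performance

used_gb = (GETH_SNAP_GB + 26 * GETH_GROWTH_GB_WEEK   # ~6 months of growth
           + CL_BASE_GB + BLOB_GIB * 1.0737)         # GiB -> GB
needed_gb = used_gb / (1 - FREE_FRACTION)
print(f"~{used_gb:.0f} GB used after ~6 months; ~{needed_gb:.0f} GB disk minimum")
# -> ~1455 GB used, ~1939 GB minimum: 2 TB leaves no room for the next prune
# cycle or another BPO bump; 4 TB does.
```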
8) Production API patterns we deploy (and why)
- Separate concerns:
- Public JSON‑RPC → Nginx/Envoy with:
- HTTP/2 keepalives, request size caps, rate limiting per IP/key.
- Method allowlisting/denylisting (e.g., block debug_*, trace_* if not needed); a minimal filtering sketch appears at the end of this section.
- Sticky routing for subscription websockets, rolling restarts. (docs.erigon.tech)
- Private Engine API/Beacon API → localhost or VLAN; JWT secret managed by your secret store; rotate on client upgrades. (geth.ethereum.org)
- Prefer finalized/safe tags for financial reporting and risk‑sensitive reads ("finalized" and "safe" block params), and document the latency implications to internal consumers. (ethereum.org)
- Choose the right EL for the job:
- Erigon’s rpcdaemon excels at range/history queries with pruning control;
- Reth emphasizes high throughput with TLC NVMe and compact full nodes;
- Geth is battle‑tested, with mature tooling and snap sync. Mix to match API SLAs. (docs.erigon.tech)
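To make the method-allowlisting pattern concrete, here is a minimal filtering proxy sketch in Python (stdlib only). It is illustrative: in production you would enforce this in Nginx/Envoy or via Erigon's --rpc.accessList, and the upstream URL, listen port, and method set below are assumptions to adapt:

```python
# Minimal JSON-RPC method-allowlist proxy (illustration of the pattern only).
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "http://localhost:8545"   # assumption: local execution client
ALLOWED = {"eth_chainId", "eth_blockNumber", "eth_getBlockByNumber",
           "eth_call", "eth_getLogs"}  # debug_*/trace_* are implicitly rejected

class FilteringProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        raw = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        try:
            req = json.loads(raw)
        except json.JSONDecodeError:
            self.send_error(400, "invalid JSON")
            return
        calls = req if isinstance(req, list) else [req]  # handle batches too
        if not all(isinstance(c, dict) and c.get("method") in ALLOWED
                   for c in calls):
            self.send_error(403, "method not allowed")
            return
        fwd = urllib.request.Request(UPSTREAM, data=raw,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(fwd) as resp:
            payload = resp.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8645), FilteringProxy).serve_forever()
```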
9) Hands‑on: minimal, correct CL↔EL wiring in 2026
- Geth EL (Engine API on 8551, JWT at /secrets/jwt.hex):
geth --authrpc.addr localhost --authrpc.port 8551 --authrpc.vhosts localhost --authrpc.jwtsecret /secrets/jwt.hex (geth.ethereum.org)
- Teku CL (pair to local EL; REST off by default externally):
teku --network=mainnet --ee-endpoint=http://localhost:8551 --ee-jwt-secret-file=/secrets/jwt.hex (docs.teku.consensys.net)
- Besu EL equivalent (if you dual‑home or swap ELs):
besu --engine-rpc-port=8551 --engine-host-allowlist=localhost,127.0.0.1 --engine-jwt-secret=/secrets/jwt.hex (besu.hyperledger.org)
Checkpoint sync your CL to cut initial sync time; verify finalized head before starting validators. (docs.nethermind.io)
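All three commands above point at the same 32‑byte hex secret. If you need to generate one, openssl rand -hex 32 works, as does this Python equivalent (the /secrets/jwt.hex path simply matches the commands above):

```python
# Generate the shared EL/CL JWT secret: 32 random bytes, hex-encoded.
import os
import secrets
import stat

path = "/secrets/jwt.hex"            # matches the commands above; adjust as needed
with open(path, "w") as f:
    f.write(secrets.token_hex(32))   # 64 hex characters
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # 0600: readable by the node user only
```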
10) Practical examples: sizing + ops playbooks
- Example A: “Solo + light RPC” (budget, on‑prem)
- Hardware: Ryzen/E‑core 8c, 32 GB ECC, 4 TB TLC NVMe + 512 GB OS SSD.
- Clients: Geth + Teku, MEV‑Boost to reputable relays; JSON‑RPC behind Caddy with per‑IP 10 RPS.
- Headroom: 1.5 TB free post‑sync; set NVMe SMART alerts at 70% wear; prune quarterly if Geth DB hits 1.1–1.2 TB. (geth.ethereum.org)
- Example B: “API‑first startup” (cloud, multi‑EL)
- 3× EL nodes (Erigon full, Reth full, Geth snap) behind Envoy; 1× CL “supernode” if you need blob retrieval.
- Storage: 2–4 TB TLC NVMe per EL; 1 TB+ for beacon + blob retention target.
- Practices: method allowlists per tenant, per‑method budgets, WS for newHeads only, and nightly eth_call replay tests at “finalized” to validate indexer correctness. (docs.erigon.tech)
- Example C: “Enterprise validator fleet”
- Dedicated validator boxes (no public RPC), 64 GB RAM, 4 TB NVMe, dual‑WAN 100/50 Mbps for resilience; remote signer HSMs; DVT as appropriate.
- Monitoring: track gossip peers, attestation inclusion distance; alert if blob gossip throughput drops post‑BPO2. (eips.ethereum.org)
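For the blob-throughput alert in Example C, one hedged approach is to poll the standard Beacon API blob-sidecar listing on recent heads and compare a moving average against the current target (the localhost:5052 port is an assumption, since REST defaults vary by client, and under PeerDAS a node may need supernode/semi-supernode mode to return full listings):

```python
# Hedged monitoring sketch: count blob sidecars in the head block via the
# standard Beacon API. Track a moving average against the BPO2 target (14);
# a sustained shortfall can indicate gossip/bandwidth trouble.
import json
import urllib.request

BEACON = "http://localhost:5052"  # assumption: private beacon REST endpoint

def blob_count(block_id="head"):
    url = f"{BEACON}/eth/v1/beacon/blob_sidecars/{block_id}"
    with urllib.request.urlopen(url) as resp:
        return len(json.load(resp)["data"])

print("blobs in head block:", blob_count())
```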
11) Client‑specific footnotes worth knowing in 2026
- Lighthouse v8 and PeerDAS: Post‑Fulu, Beacon APIs won’t serve blobs by default; enable --supernode/--semi-supernode if you must expose full blob data. Plan for much higher disk and bandwidth than the baseline. (github.com)
- Erigon: Follow storage guidance and avoid remote‑DB rpcdaemon except for special cases; use the embedded rpcdaemon on native Linux filesystems for performance. (docs.erigon.tech)
- Reth: Prioritizes fast TLC NVMe; plan ≥1.2 TB for full mainnet and 24+ Mbps steady bandwidth. (reth.rs)
- Nethermind: Keep to 2 TB+ fast SSD if running EL+CL on one host; archive mode needs serious RAM (128 GB). (docs.nethermind.io)
12) Checklist: what to buy and how to configure (2026 edition)
Hardware
- CPU: 4–8c modern with strong ST; 8c/16t if validating or building.
- RAM: 32 GB minimum for production full node; 64 GB if validating or serving heavy RPC.
- Storage: 4 TB TLC NVMe with DRAM; reserve ≥20% free; avoid QLC.
- Network: ≥50/15 Mbps; ≥100/50 Mbps if building blocks. UPS + router with dual‑WAN failover. (eips.ethereum.org)
Software
- EL: pick at least two different clients if you expose public RPC; pin versions that support Fusaka.
- CL: enable REST only on private networks; consider “supernode” mode only if required.
- Engine API: always JWT, localhost/priv‑IP binding; never expose publicly. (besu.hyperledger.org)
API hygiene
- Use "finalized"/"safe" for accounting/reporting; throttle trace_*/debug_*; WS for subscriptions only.
- Enforce method allowlists and per‑tenant rate limits; log and sample long‑tail latencies. (ethereum.org)
Capacity planning
- Add 160–224 GiB to beacon storage for BPO1→BPO2 blob targets; revisit quarterly as parameters evolve. (blog.ethereum.org)
Governance/ops
- Integrate eth_config into your pre‑fork checks to confirm blob schedules and fork readiness across fleets (see the probe sketch in section 6). (eips.ethereum.org)
13) FAQ for CTOs and platform leads
- “Can we still get away with 2 TB SSD?” If you’re just experimenting, yes. For production where downtime hurts, move to 4 TB NVMe: it buys headroom for blob retention, pruning cycles, and growth through 2027. (eips.ethereum.org)
- “How much bandwidth will BPO2 actually cost us?” Expect higher sustained gossip utilization on the CL and longer peaks during sync. Hitting EIP‑7870’s 50/25 Mbps for validators and 100/50 Mbps for local builders is prudent in 2026. (eips.ethereum.org)
- “Can we read blobs via Beacon APIs like before?” Not by default under PeerDAS; you must opt into supernode modes (client‑specific). Many teams offload full blob access to specialized nodes or providers. (github.com)
14) Bottom line
- Ethereum’s 2026 node footprint is still feasible on commodity servers—but “commodity” now means TLC NVMe, 32–64 GB RAM, and serious bandwidth if you validate or run your own RPC.
- The most impactful protocol change for operators is the blob target increase via BPOs. Prepare for the bandwidth hit and the additional 160–224 GiB of short‑term blob retention on beacon nodes.
- Treat API design as production software: separate public/private endpoints, secure Engine/Beacon APIs, and adopt finalized/safe reads where business‑critical.
If you want a tailored bill of materials and API topology for your use case (e.g., internal data lake, compliance archives, SLA’d RPC), 7Block Labs can produce a right‑sized design and migration plan in under a week—grounded in these 2026 realities.
References
- ethereum.org “Run a node” and JSON‑RPC docs (min/recommended specs; method conventions and “safe/finalized”). (ethereum.org)
- EIP‑7870 (Hardware/Bandwidth recommendations). (eips.ethereum.org)
- Fusaka/BPO schedules (EF blog) and Lighthouse v8 release notes (PeerDAS impact on Beacon APIs). (blog.ethereum.org)
- EIP‑4844 core parameters (blob size, retention) and Teku’s blob storage estimate. (eips.ethereum.org)
- Client docs for sizing and ops: Reth, Erigon, Nethermind, Geth; Engine API JWT/port config (Besu/Geth). (reth.rs)